
Question

4. (30 marks) Consider the model with one exogenous covariate $x$:
$y^{*}=\beta x+\varepsilon$, with $y=1$ if $y^{*}>0$ and $y=0$ otherwise,
where $y^{*}$ is a latent (i.e. unobserved) variable and $x$ and $\varepsilon$ are independently distributed.
(i) Show that the above model is the probit model when $\varepsilon$ follows the standard normal distribution (i.e. it yields the same distribution of $y|x$ as the probit model). (5 marks)
(ii) Suppose now that we have an iid sample $(y_{i}, x_{i})$, $i=1,\ldots,N$. Conduct a Monte Carlo experiment to study:
(a) The properties of the probit estimator of $\beta$ when $\varepsilon$ follows the standard normal distribution. Explain your results. (10 marks)
(b) The properties of the probit estimator of $\beta$ when $\varepsilon$ follows the standard uniform distribution. Explain your results. (10 marks)
Throughout your experiment use the parameter values $N=100$, $\beta=\ldots$, and suppose that $x$ follows a normal distribution with mean 0 and variance 1.
(iii) Repeat part (ii) with $N=10{,}000$. Explain any change(s) in your results. (5 marks)
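For concreteness, here is a minimal sketch of one possible Monte Carlo design in Python (numpy and statsmodels assumed). The value of beta is illegible in this copy of the question, so the sketch assumes beta = 1 purely for illustration; substitute the value from your own copy.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)   # arbitrary seed, for reproducibility
beta = 1.0                        # assumed value; the question's beta is illegible here
N, reps = 100, 1000

def mc(eps_draw):
    # Monte Carlo distribution of the probit slope estimator for a given error law
    est = []
    for _ in range(reps):
        x = rng.normal(0.0, 1.0, N)                      # x ~ N(0, 1), as specified
        y = (beta * x + eps_draw(N) > 0).astype(float)   # latent-variable threshold rule
        est.append(sm.Probit(y, sm.add_constant(x)).fit(disp=0).params[1])
    return np.array(est)

normal_eps = mc(lambda n: rng.normal(0.0, 1.0, n))    # part (a): correctly specified probit
uniform_eps = mc(lambda n: rng.uniform(0.0, 1.0, n))  # part (b): misspecified error law
print(normal_eps.mean(), normal_eps.std())
print(uniform_eps.mean(), uniform_eps.std())

With standard normal errors the estimates should be centered near the true beta, since the probit likelihood is correctly specified; with standard uniform errors the link is misspecified (those errors are not even centered at zero), so the probit estimates settle around a pseudo-true value instead, and raising N to 10,000 mainly shrinks the Monte Carlo spread around each of those centers.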



Answers

Use the data in CEOSAL2 to answer this question.
(i) Estimate the model
lsalary $=\beta_{0}+\beta_{1}$ lsales $+\beta_{2}$ lmktval $+\beta_{3}$ ceoten $+\beta_{4}$ ceoten$^{2}+u$
by OLS using all of the observations, where lsalary, lsales, and lmktval are all natural
logarithms. Report the results in the usual form with the usual OLS standard errors. (You may
verify that the heteroskedasticity-robust standard errors are similar.)
(ii) In the regression from part (i) obtain the studentized residuals; call these $str_{i}$. How many
studentized residuals are above 1.96 in absolute value? If the studentized residuals were
independent draws from a standard normal distribution, about how many would you expect to
be above two in absolute value with 177 draws?
(iii) Reestimate the equation in part (i) by OLS using only the observations with $\left|str_{i}\right| \leq 1.96$. How do the coefficients compare with those in part (i)?
(iv) Estimate the equation in part (i) by LAD, using all of the data. Is the estimate of $\beta_{1}$ closer to the OLS estimate using the full sample or the restricted sample? What about for $\beta_{3}$?
(v) Evaluate the following statement: "Dropping outliers based on extreme values of studentized
residuals makes the resulting OLS estimates closer to the LAD estimates on the full sample."

Part (i). We get the OLS residuals $\hat{u}_{t}$, then run the regression of $\hat{u}_{t}$ on its first lag. The coefficient on $\hat{u}_{t-1}$ is $\hat{\rho}$, and it is 0.281 with a standard error of 0.094. This produces a t statistic of 2.99, so there is evidence of serial correlation in the errors. This test requires strict exogeneity, and we can make a case that all explanatory variables are strictly exogenous. The dummies, such as the seasonal dummies, and also the time trend must be exogenous because they are determined by the calendar. For the statewide unemployment rate, it is safe to assume that unexplained changes today in prcfat, the dependent variable, do not cause future changes in the statewide unemployment rate. Lastly, for the two policy variables, the speed limit law and the seat belt law, it is reasonable to assume they are strictly exogenous, because over this period the policy changes were permanent once they occurred.

Part (ii). We are still estimating the betas by OLS, but we compute different standard errors that have some robustness to serial correlation. The estimate on the speed law is .0671 with a standard error of .0267, and the estimate on the seat belt law is -.0295 with a standard error of .0331. Compared with the usual OLS standard errors and t statistics, the t statistic for the speed law has fallen to about 2.5, but the variable is still significant. For the seat belt law, the t statistic is now less than one in absolute value, so with the new standard errors we find little evidence that the seat belt law had an effect on the percent of accidents resulting in fatalities.

Part (iii). These are the estimates using the Prais-Winsten method (I skip the results for the time trend and the monthly dummies); $\hat{\rho}$ is 0.289. There are no important changes: both policy-variable coefficients get closer to zero, and the standard errors are bigger than the incorrect OLS standard errors, so the basic conclusion is the same. The increase in the speed limit appears to increase prcfat, while the seat belt law, although estimated to decrease prcfat, does not have a statistically significant effect.
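For readers working outside Stata, here is a minimal Python sketch of the same three steps, assuming a local copy of the TRAFFIC2 data as traffic2.dta (the file name and the abbreviated regressor list are assumptions; the full model also includes the monthly dummies):

import pandas as pd
import statsmodels.api as sm

df = pd.read_stata("traffic2.dta")   # assumed local copy of the Wooldridge dataset
X = sm.add_constant(df[["t", "unem", "spdlaw", "beltlaw"]])   # abbreviated regressor set
ols = sm.OLS(df["prcfat"], X).fit()

# Part (i): regress the OLS residuals on their first lag; the slope is rho-hat
u = ols.resid.to_numpy()
ar1 = sm.OLS(u[1:], sm.add_constant(u[:-1])).fit()
print(ar1.params[1], ar1.tvalues[1])   # rho-hat and its t statistic

# Part (ii): same betas, serial-correlation-robust (Newey-West/HAC) standard errors
nw = sm.OLS(df["prcfat"], X).fit(cov_type="HAC", cov_kwds={"maxlags": 4})
print(nw.bse)

# Part (iii): feasible GLS in the Prais-Winsten spirit, via an iterated AR(1) fit
fgls = sm.GLSAR(df["prcfat"], X, rho=1).iterative_fit(maxiter=10)
print(fgls.params)

GLSAR iterates Cochrane-Orcutt-style on an AR(1) error, which plays the role of the Prais-Winsten feasible GLS here; the exact numbers will differ slightly from Stata's prais command.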

Computer exercise number eight is of a somewhat different nature than the ones we have been solving so far, because we will not need any data set; instead, we are going to generate our own variables. I will be using Stata, as always, where it is very easy to generate specific kinds of random variables, but feel free to use R, Python, Matlab, or anything else you want. In part (i) we start by generating 500 observations of an explanatory variable x from the uniform distribution with support [0, 10]. As the author mentions, most statistical packages have a command for the standard uniform on [0, 1], and you then multiply the observations by 10. Even though Stata could generate it directly, I will do it that way, because it is very probable you will have to do it that way too. So first we set the number of observations to 500, and then we generate a random variable x1 with the uniform() command (the command for the Uniform(0, 1)) and multiply it by 10. In the data browser you can see that we indeed have 500 observations of this random variable. Now we are asked to calculate the sample mean and the sample standard deviation, but before we do, let's see what we expect them to be; in other words, what are the theoretical mean and standard deviation of the Uniform(0, 10) distribution? As you may remember from introductory statistics, the expected value of a uniform variable defined on an interval from a to b (a denotes the lower bound of the support, b the upper bound) is (a + b)/2; here a = 0 and b = 10, so the expected value is 5. The variance is (b - a)^2/12; plugging in our numbers gives 8.333, and the standard deviation is of course its square root, about 2.89. So we expect to find a sample mean close to 5 and a sample standard deviation close to 2.89. Let's see what happens when we summarize our x1 variable. You see that the sample mean is not too far from 5, though of course not exactly equal to 5 (I will discuss why shortly), and the sample standard deviation is also slightly less than 2.89. What is very interesting is that the min and the max are not 0 and 10: each individual real number has probability zero of occurring. And here is the whole idea. You might be tempted to think that what we generated is the population, but no: the population is every Uniform(0, 10) random variable ever generated, a population of infinite size. This is just one random sample of 500 observations, and as always with random samples, we need a very large size before the sample mean and standard deviation get very close to the population values; that is the weak law of large numbers at work. Here the random sample is not that large, so we see these deviations from the theoretical values. In part (ii) we now generate 500 errors u from the Normal(0, 36).
As before, we first generate a standard normal variable u1 and then multiply it by 6 to get a Normal(0, 36) variable; 36 is the variance, so 6 is the standard deviation. Of course, we expect this variable to have a mean of 0 and a standard deviation of 6. Let's see what happens when we summarize it. The mean is fairly close to zero this time; the standard deviation, on the other hand, is a bit above 6, at 6.54. Again, this is a small-sample issue: if the number of observations were 10,000, 100,000, or a billion, the sample statistics would converge in probability to the theoretical values, with a sample mean extremely close to 0 and a standard deviation extremely close to 6. I promise that by the end of the video I will redo the whole exercise with a huge number of observations to see what happens. Just out of curiosity, let's produce the histogram of this u variable and superimpose a normal density. You see how close it is? Not too far off, but there are gaps, and some frequencies are higher than they should be; it is only approximately normal. We will see later that with enough random draws it becomes almost identical to the theoretical density. Now, in part (iii), we generate the y variable as follows: y1 = 1 + 2*x1 + u1. Here it is: 500 values of y1. Now we run a very simple regression to see what happens: regress y1 on x1. Note that if I deliberately made the mistake of also including the error term as a regressor, we would get an R-squared of 1 and the coefficients would be exact, because that is the actual data-generating process; that is not what we want. We regress only on the deterministic part: x1 explains part of the variation in y1, and u1 plays the role of the error term. What do we expect to get? An estimate for the constant close to 1, an estimate of the slope coefficient close to 2, and an R-squared of around 0.5 (more precisely, Var(2x)/(Var(2x) + Var(u)) = 33.3/69.3, which is about 0.48). Let's see. Well, not too bad. The number of observations is right and the joint F test is extremely statistically significant, but the R-squared is 0.463, again not exactly what we theoretically expected. Our estimate of the constant is not 1; it is 0.321, and it is not statistically significant, meaning the underlying t test says it is not statistically significantly different from zero. So it is as if we were estimating a zero constant, and in any case, even if it were significant, it is an estimate far from 1. Our slope coefficient is very statistically significant (look at the high t stat and the zero p-value), and it is quite close to 2, but it is 2.13, not exactly 2; the true value is not even within one standard error of the estimate. So from this sample alone we would definitely not conclude that the underlying population parameter for the constant is 1. Now, why is this happening?
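If you want to reproduce parts (i) through (iii) outside Stata, here is a minimal Python sketch (the seed is arbitrary, so your draws, like mine, will differ from anyone else's):

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)       # arbitrary seed
n = 500
x1 = 10 * rng.uniform(0.0, 1.0, n)   # part (i): scale the standard uniform up to U(0, 10)
u1 = 6 * rng.normal(0.0, 1.0, n)     # part (ii): scale the standard normal up to N(0, 36)
print(x1.mean(), x1.std(ddof=1))     # theory: 5 and sqrt(100/12), about 2.89
print(u1.mean(), u1.std(ddof=1))     # theory: 0 and 6

y1 = 1 + 2 * x1 + u1                 # part (iii): the true data-generating process
res = sm.OLS(y1, sm.add_constant(x1)).fit()
print(res.params)                    # wanders around (1, 2) in any one sample
print(res.rsquared)                  # theory: 33.3/69.3, about 0.48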
The first thing to remember is that we are not dealing with the underlying population but with a random sample, and this sample does not have exactly mean 0 and standard deviation 6 for the error, nor exactly mean 5 and standard deviation 2.89 for x. These discrepancies between the sample and the underlying population for x1 and u1 can definitely account for the discrepancies in the two estimates: the differences get absorbed into the estimates, especially the constant, and that is why it is imprecise. It is not biased in the statistical sense; rather, in this particular computed sample we get these inaccurate estimates. If we redid exactly the same analysis but with a sample size of two billion, I can almost guarantee we would get estimates of 2 and 1. Now, in part (iv), we need to obtain the OLS residuals u-hat and verify that equation (2.60) holds subject to rounding error. Let's recall what equation (2.60) says (I copied it from the book): the sum of the OLS residuals is equal to zero, and likewise the sum of the products of x_i and the OLS residuals is equal to zero. Note that we are talking about the estimated OLS residuals, not the equation errors or disturbances. This is not a restriction we impose; it comes from the optimization, from solving the OLS problem of minimizing the sum of squared residuals. It comes from the first-order conditions, so it always holds, no matter what; even if we run a nonsense regression it will hold, because this is how the OLS estimates are obtained. If you think about it in more mathematical terms, in terms of linear algebra, the second condition says that the inner product of x and u-hat is equal to zero, which means the two vectors are orthogonal to each other: they form a ninety-degree angle. In the language of random variables, it means the regressor and the residuals are uncorrelated in sample (let's say uncorrelated rather than independent, and not get into the difference here). First, let's obtain the OLS residuals; we do it with predict followed by a name, say resids. Now the variable resids holds the estimated residuals u1-hat. Let's generate a new variable, sum1, equal to the sum of the residuals. This sum1 is really a single number (a one-by-one quantity repeated in every entry of the variable), and going to the browser we can see it is practically zero. You see this e-notation here?
It means the number is something like 3.17 times 10 to the minus 7, which is practically zero; we are talking about rounding error. So we have verified the first claim. For the next one, let's define another variable, call it sum2, equal to the sum of x1 times the residuals, and see whether it is also zero. Yes, it is also zero up to the fourth decimal place; again, practically zero up to rounding. Now we compute the same quantities as in equation (2.60) but using the errors in place of the residuals. Let's think about it for a minute. We said those two conditions hold by definition; they are an algebraic fact coming from the first-order conditions of the OLS minimization. Do they need to hold for the errors? Is there any reason the sum of u1 has to be zero? No; here it is -48.17. There is absolutely no reason the sum of the errors in a sample should be zero; it does not come from anywhere, we have not imposed it, and it is completely random. It could be zero, but that is a probability-zero event. Now, if we compute the other sum, the sum of x1 times u1, does this have to equal zero? No, absolutely no reason; it could potentially be zero if the variables were independent and hence uncorrelated, because this quantity divided by n, minus the product of the sample means (which are supposed to be around zero), is indicative of the sample covariance between x1 and u1. If that covariance is non-zero, the quantity will be non-zero. Let's check: we compute the covariance matrix, and indeed the covariance is not zero, so there is no reason this quantity should be zero. Remember, the identities only hold for u-hat, the estimated OLS residuals, by construction of the regression method. Now, in part (vi), we need to repeat parts (i), (ii), and (iii) with a new sample of data, generating everything in exactly the same way. This is why I wrote the subscript 1 before: now I will do the same with x2 and u2, repeating what I did before and just changing 1 to 2. Let's summarize x2 and u2 and, for comparison, also summarize x1 and u1, so we have the pictures next to each other. Look at that: not the same, not at all. The means are quite different (one is below 5, the other above 5), the standard deviation here is higher, the min and max are different; everything is different. Isn't that crazy? Well, not really, because, as we said, the population is every uniform or normal random variable with these parameters ever generated,
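Continuing the sketch above, the two identities in equation (2.60), and their failure for the true errors, can each be checked in one line:

uhat = res.resid                      # estimated OLS residuals
print(uhat.sum())                     # ~0 up to floating point: first-order condition
print((x1 * uhat).sum())              # ~0 as well: the regressor is orthogonal to uhat
print(u1.sum())                       # the true errors need satisfy neither identity...
print((x1 * u1).sum())                # ...their sample covariance with x1 is nonzero
print(np.cov(x1, u1)[0, 1])           # sample covariance, typically not zero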
and now we are drawing different random samples of 500 observations each; they do not have to be equal, just random. Now we define y2: generate y2 = 1 + 2*x2 + u2, and run the regression of y2 on x2. Let me first also rerun the previous regression, so we have the whole picture. Look at that: quite different again. The estimates happen to be close to each other entirely by chance, but they are different: a different R-squared, 46% versus 53%, different standard errors, different p-values. Everything is different. What is similar is the statistical significance of the slope coefficient; and the constant term, again, is not statistically significant, exactly because, as we said before, the sample mean does not correspond to the theoretical mean, and the same goes for the standard deviation. Why are the results different? Because we have different random samples. If we did this 1,000 times, we would get 1,000 different sets of estimates; they need not be dramatically different, but they will differ somewhat, unless two random samples coincide, which is a probability-zero event. And now an extra bonus part: I will do the same with a lot of observations, and I want you to see the magic of statistics and of convergence results. I set the number of observations to 10,000 instead of 500 and do exactly what I did before, just with a larger random sample; the new variables are capital X, U, and Y. First, let's summarize X and U. Look at that: these are much better than before. We still do not hit the theoretical mean of exactly 5, because for that we would need an even bigger sample (I will not run it with a billion observations, as that would take some time), but the standard deviation of the normal variable is practically 6, and the means are very close to the theoretical quantities. I also want you to look again at the histogram of the first error, u1, from the 500-observation sample, with a normal density superimposed: not too bad, but as we said there are gaps here and there. Now let me do the same for the error from the larger sample. Look at that, beautiful: it overlaps so much better, and with 10 million or a billion draws it would be almost identical; you would not be able to tell the difference, especially if I reduced the width of the bins. Finally, let's run the regression with the larger random sample and see whether we get estimates closer to the theoretical ones. Whoa, look at that: the R-squared is closer to 50% than ever before, and the estimate of the slope coefficient is very close to 2 indeed and strongly statistically significant.
And now the constant term is around 1.2, so the true value of 1 is within two standard deviations of the estimate, and the constant is statistically significant. You see the difference. The problems that arose earlier are small-sample problems, and the bad news is that in applied research we are usually not able to extract a huge random sample. This is why, even when we know the true model and write it down correctly, we can get weird estimates just from having a small sample. So remember: sample size matters, and it is extremely important.
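The bonus comparison at the end is just the same experiment at two sample sizes; continuing the sketch:

for n in (500, 10_000):
    x = 10 * rng.uniform(0.0, 1.0, n)
    u = 6 * rng.normal(0.0, 1.0, n)
    fit = sm.OLS(1 + 2 * x + u, sm.add_constant(x)).fit()
    print(n, fit.params.round(3), round(fit.rsquared, 3))   # drifts toward (1, 2) and 0.48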

This is the result for part (i). We use the full sample, which has 177 observations. From this estimation we obtain the studentized residuals, which we call $str_{i}$.

For part (ii), the number of studentized residuals above 1.96 in absolute value is nine. If the studentized residuals were independent draws from a standard normal distribution, we would expect about 5 percent of the sample, that is, 177 times 0.05, or about 8.85, so between eight and nine studentized residuals above 1.96 in absolute value. This is because in a standard normal distribution about 95 percent of the observations lie within 1.96 standard deviations of the mean (and the standard deviation is one); equivalently, about 5 percent of the observations are either above 1.96 or below minus 1.96. That is where the 5 percent figure comes from. As for the question's wording "above two in absolute value," you can check it: there are eight observations with studentized residuals above two in absolute value.

For part (iii), the studentized residuals are used to detect outliers. We drop the outliers, defined as observations with studentized residuals above 1.96 in absolute value, so we drop the nine cases found in part (ii) and re-estimate the model of part (i) using the remaining 168 observations. This is the result. Comparing with the regression in part (i), we find that the coefficient on lmktval becomes significant at the 1% level. Let me go back to part (i): lsales is significant at the 1% level, which I denote with three stars; lmktval is significant at the 5% level, so two stars; ceoten is significant at the 1% level; and ceoten squared is significant at the 5% level. Back to part (iii): lsales is still significant at the 1% level; lmktval, which before was significant at the 5% level, is now significant at the 1% level; and nothing changes in terms of significance for ceoten or for ceoten squared, which is still significant at the 5% level. So the estimates on lsales and ceoten keep the same level of significance; their exact values differ, but not substantially enough to change their significance level. The estimate on lmktval increases in magnitude and in significance level. You may also notice that the magnitudes of the estimates on lsales and ceoten decrease, but not by much, and the coefficient on ceoten squared barely changes in magnitude.

Now we use least absolute deviations (LAD) to estimate the regression of part (i) again, using all the data; this is part (iv). Here is the result. LAD is estimated with a different methodology, so it does not report an R-squared; to measure the fit of the model you would look at the criterion value your statistical software reports. I will not report that here, because in this problem we do not care about the fit of the model; we care about the estimates on the explanatory variables. Comparing this regression with the previous regressions where we used OLS, we see that $\hat{\beta}_{1}$,
the coefficient on lsales, is closer to the OLS estimate from the restricted sample, where the restricted sample is the regression in part (iii) in which we dropped the outlier observations. We do not make the same observation for $\hat{\beta}_{3}$: the coefficient on ceoten is actually closer to the estimate from the full sample. Given these results, in part (v) we can evaluate the statement: "Dropping outliers based on extreme values of studentized residuals makes the resulting OLS estimates closer to the LAD estimates on the full sample." This statement is not always true; it does not hold for every estimate.
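Here is a minimal Python sketch of parts (i) through (iv), assuming a local copy of the CEOSAL2 data as ceosal2.dta (the file name is an assumption; the variable names follow the dataset):

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_stata("ceosal2.dta")                  # assumed local copy
formula = "lsalary ~ lsales + lmktval + ceoten + I(ceoten**2)"

ols = smf.ols(formula, data=df).fit()              # part (i): full-sample OLS

str_i = ols.get_influence().resid_studentized_external   # part (ii): studentized residuals
print(int((np.abs(str_i) > 1.96).sum()))                 # count above 1.96 in absolute value

trimmed = smf.ols(formula, data=df[np.abs(str_i) <= 1.96]).fit()   # part (iii)

lad = smf.quantreg(formula, data=df).fit(q=0.5)    # part (iv): LAD = median regression
print(ols.params, trimmed.params, lad.params, sep="\n")

Note that quantreg at q = 0.5 is median (LAD) regression, and that the boolean mask in part (iii) assumes the rows used in part (i) line up one-for-one with df, i.e., no missing values in the model variables.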

In this video we will be testing a hypothesis using data on slab avalanches from the western United States and Canada. We are given 16 measurements. We are told the population has a mean of 67; we are to verify that the mean of our sample is 61.8 and the standard deviation is 10.6, and then to test at a 1% significance level, which means alpha will be 0.01. I know I am going to use the t distribution because I do not know the standard deviation of the population. Now, there are a couple of ways to verify x-bar and s. I have the measurements entered in my calculator: Stat, Edit, and enter all the information into L1. If I run the t test straight off the data in my calculator, I actually get slightly different values than the textbook, because it does not round the same way. So let's first run the one-variable statistics: Stat, over to Calc, option 1 (1-Var Stats), List L1, cursor down to Calculate, Enter. Yes, I can verify that the mean is indeed 61.8 and the standard deviation is 10.6. Again, if I used these stored stats directly, the calculator would carry all the decimal places and the answer would change a little from what the textbook wants. Now, for the t test, I press Stat again, but this time go over to Tests, option 2 (T-Test), and select Stats. Remember, we are told that the hypothesized mean is 67; for x-bar I use the rounded value we were given, 61.8, then s = 10.6 and n = 16. We are doing a two-tailed test because we are testing the claim that the mean is different, that is, not equal to 67. Select Calculate, press Enter, and we have our t value and our p-value, and the calculator again confirms x-bar and s. Rounding, my t value is -1.962. If you have to do it by hand, here is how. My sketch has two cutoffs, -1.962 and +1.962, because it is a two-tailed test, and the calculator already told us that the p-value is approximately 0.0686. Doing it by hand, plug the values in: x-bar is 61.8, mu is 67, and the sample standard deviation is 10.6, divided by the square root of 16; calculating this gives the t value above. Then I go to my t table and look at the degrees of freedom, which is the sample size minus one, so I go down to 15 and estimate where 1.962 falls; it lands between two columns, and since I am doing a two-tailed test I read off the two corresponding tail areas, so I can say my p-value is estimated to be between those two numbers. And indeed, the p-value of 0.0686 does fall between them.

Remember, our null hypothesis was that mu equals 67, and our alternative hypothesis was that mu does not equal 67. Because our p-value of 0.0686 is greater than our significance level of 0.01, we fail to reject the null. So at the 1% level of significance, the sample evidence does not support the claim that the average thickness of the avalanches is different.
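If you do not have the calculator at hand, the same test from the summary statistics is a few lines of Python (scipy assumed):

from math import sqrt
from scipy import stats

xbar, mu0, s, n = 61.8, 67.0, 10.6, 16
t = (xbar - mu0) / (s / sqrt(n))          # about -1.962
p = 2 * stats.t.sf(abs(t), df=n - 1)      # two-tailed p, about 0.0686
print(t, p)                               # p > 0.01, so fail to reject the null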

