9.8. First, we know that Y₁, Y₂, ..., Yₙ are distributed as Poisson random variables with mean λ, and they are also i.i.d., which means independent and identically distributed. For part (a) we want to find the MLE for λ.

For the first step we write the likelihood. The notation is capital L of λ: capital L because we are building the likelihood, and inside the parentheses goes the parameter, which in this case is λ. It is equal to a product, which just means you are multiplying a lot of terms, running from i = 1 to n, and inside the product you write the probability mass function. It looks a little different from the usual single-observation version: instead of λ to the y, it is λ to the yᵢ, because there are many observations, multiplied by e to the negative λ, and all of that divided by yᵢ factorial. That is the first step.

Now we need to simplify. This equals the product from i = 1 to n of 1 over yᵢ factorial, because we can collect the factorials into their own product. Then let's work with the λ part: when you multiply powers of λ, the exponents add, so you get λ raised to the summation of the yᵢ from i = 1 to n. Finally we collect the e^(-λ) factors; since the exponent is the same constant in each one, multiplying n of them gives e^(-nλ).

For the second step we take the natural log of both sides. The natural log of L(λ) equals the natural log of the first part, ln of the product from i = 1 to n of 1 over yᵢ factorial, plus the summation from i = 1 to n of yᵢ times the natural log of λ, minus nλ. You might be asking: wait, what happened with that middle term? We are using one of the logarithm rules. The term starts out as the natural log of λ raised to the summation of the yᵢ, but since that summation is an exponent, you can bring it down in front and keep just ln λ.

After that we take the derivative of the natural log of the likelihood with respect to λ. Only the parts containing λ matter; the rest is constant, and the derivative of a constant with respect to the variable is just zero, so the first term drops out. For the middle term, the summation from i = 1 to n of yᵢ is a constant multiplier, and the derivative of ln λ with respect to λ is 1/λ, so the λ comes down into the denominator. The last term, -nλ, differentiates to -n.
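Written out in standard notation, the steps so far are (this is just a restatement of the derivation described above, with y₁, ..., yₙ the observed counts):

```latex
L(\lambda) = \prod_{i=1}^{n} \frac{\lambda^{y_i} e^{-\lambda}}{y_i!}
           = \left(\prod_{i=1}^{n} \frac{1}{y_i!}\right) \lambda^{\sum_{i=1}^{n} y_i} \, e^{-n\lambda}

\ln L(\lambda) = \ln\!\left(\prod_{i=1}^{n} \frac{1}{y_i!}\right)
               + \left(\sum_{i=1}^{n} y_i\right) \ln \lambda \;-\; n\lambda

\frac{d}{d\lambda} \ln L(\lambda) = \frac{\sum_{i=1}^{n} y_i}{\lambda} - n
```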
One thing to note: since we are trying to find the maximum likelihood estimator, instead of writing λ we now use a hat, so we write λ̂. It is the same λ, we are just marking the value that maximizes the likelihood. So the derivative is the summation from i = 1 to n of yᵢ, divided by λ̂, minus n. After this we set it equal to zero, and from there we solve for λ̂. If you do that, you get n equal to the summation from i = 1 to n of yᵢ divided by λ̂, and then solving for λ̂ gives λ̂ equal to the summation from i = 1 to n of yᵢ over n, which is Ȳ, the sample mean. So that was part (a).

Okay, for part (b) we want to find the expected value of λ̂. The expected value of λ̂ is the expected value of Ȳ. Now, instead of writing Ȳ, I am going to use the expression from before, which is the same thing: the summation from i = 1 to n of Yᵢ, all over n. The expected value goes inside the summation, and since the expected value of each Yᵢ is λ, you are adding λ to itself n times, so you can just write nλ. Dividing by n, we end up with just λ.

Now for the variance of λ̂, it is the same idea: the variance of Ȳ, which is the variance of the summation from i = 1 to n of Yᵢ over n. Since this n is a constant, we can take it out, but when you pull a constant out of a variance it comes out squared, so you get 1 over n² times the summation from i = 1 to n of the variance of Yᵢ. (The variance goes inside the summation sign because the Yᵢ are independent.) We know the variance of each Yᵢ is λ, so again we are adding n λ's, which gives nλ over n². Simplifying, we get λ over n.

All right, now for part (c). We want to show that Ȳ is a consistent estimator of λ. There are a couple of steps here, so let's first review what we have from part (b): the expected value of Ȳ is λ, and the variance of Ȳ is λ over n. First, notice that Ȳ is unbiased for λ. You might be saying, what does unbiased mean? It means that the expected value of the estimator matches the parameter it is estimating, which is exactly what we have here. Second, the variance of Ȳ goes to 0 as n approaches infinity: if you take the limit as n goes to infinity of λ over n, you are dividing λ by something arbitrarily large, so it goes to zero. Because Ȳ is unbiased for λ and the variance of Ȳ goes to zero as n approaches infinity, it follows that Ȳ is a consistent estimator of λ.
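Collecting parts (a) through (c), the key equations are (again, just the work above written compactly):

```latex
\frac{\sum_{i=1}^{n} y_i}{\hat{\lambda}} - n = 0
\;\Longrightarrow\;
\hat{\lambda} = \frac{\sum_{i=1}^{n} y_i}{n} = \bar{Y}

E(\hat{\lambda}) = E(\bar{Y}) = \frac{1}{n}\sum_{i=1}^{n} E(Y_i) = \frac{n\lambda}{n} = \lambda

V(\hat{\lambda}) = V(\bar{Y}) = \frac{1}{n^{2}}\sum_{i=1}^{n} V(Y_i) = \frac{n\lambda}{n^{2}} = \frac{\lambda}{n}
\;\longrightarrow\; 0 \quad \text{as } n \to \infty
```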
Now, moving on to part (d). We want to find the MLE for the probability that Y equals 0, which is e^(-λ), since Y is a Poisson random variable and P(Y = 0) is λ⁰ e^(-λ) divided by 0!, which is just e^(-λ). Well, we can use what we found before: after all the work we have done, we know that the MLE of λ is λ̂ = Ȳ. So the MLE of P(Y = 0) = e^(-λ) is just e^(-Ȳ).

Here is what is happening. When you are trying to find the MLE of a function of a parameter — in this case we found the MLE for λ, and it was just Ȳ, which we called λ̂ — then the MLE of that function is the same function evaluated at the estimator. This is the invariance property of maximum likelihood estimators. So in this example the MLE of e^(-λ) is e to the negative of the estimator we found, which is e^(-Ȳ).
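This invariance result is easy to check numerically. Below is a minimal Python sketch — my own illustration, not part of the original walkthrough, and the seed and the value of λ used are arbitrary assumptions — that simulates a Poisson sample, computes e^(-Ȳ), and compares it with the empirical frequency of zeros and the true value e^(-λ):

```python
import numpy as np

rng = np.random.default_rng(1)   # arbitrary seed, purely for reproducibility
true_lambda = 2.5                # assumed true Poisson mean for this check
n = 100_000

# Draw an iid Poisson sample.
y = rng.poisson(true_lambda, size=n)

# MLE of P(Y = 0) by invariance: plug the MLE of lambda (the sample mean) into e^(-lambda).
mle_p0 = np.exp(-y.mean())

# Compare against the raw empirical frequency of zeros and the true value.
empirical_p0 = (y == 0).mean()
true_p0 = np.exp(-true_lambda)

print(f"MLE  e^(-Ybar):    {mle_p0:.4f}")
print(f"empirical P(Y=0):  {empirical_p0:.4f}")
print(f"true e^(-lambda):  {true_p0:.4f}")
```

All three numbers should be close for a sample this large, which lines up with Ȳ (and hence e^(-Ȳ)) being a consistent estimator.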