Question
1. Let $\xi$ be uniformly distributed on $(0, 1)$, and let $\eta$ be a Bernoulli random variable with $P(\eta = 1) = 1 - P(\eta = 0) = 1/4$, independent of $\xi$. Define $X_t = \xi + \eta t$, $0 \le t \le 1$. a) Draw two typical trajectories of the random process $\{X_t,\ 0 \le t \le 1\}$. b) For fixed $t \in (0, 1]$, find the distribution of $X_t$. Find the probability $P(\text{the trajectory of } X_t,\ 0 \le t \le 1, \text{ is strictly increasing})$.
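As a quick illustration (not part of the original exercise), here is a minimal Python sketch that simulates paths of $X_t = \xi + \eta t$. With probability $3/4$ the path is flat at height $\xi$, and with probability $1/4$ it rises with slope 1, so the simulated fraction of strictly increasing paths should land near $P(\eta = 1) = 1/4$. All names below are my own.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_path(n_points=101):
    """Sample one trajectory of X_t = xi + eta * t on a grid over [0, 1]."""
    xi = rng.uniform(0.0, 1.0)       # xi ~ Uniform(0, 1)
    eta = rng.binomial(1, 0.25)      # eta ~ Bernoulli(1/4), independent of xi
    t = np.linspace(0.0, 1.0, n_points)
    return t, xi + eta * t

# Estimate P(the trajectory is strictly increasing); it should be near 1/4,
# because the path is strictly increasing exactly when eta = 1.
n_sims = 100_000
increasing = sum(np.all(np.diff(sample_path(3)[1]) > 0) for _ in range(n_sims))
print("estimated P(strictly increasing):", increasing / n_sims)
```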


Answers
Let $Y_{1}, Y_{2}, \ldots, Y_{n}$ denote a random sample from the density function given by $$f(y \mid \theta)=\left\{\begin{array}{ll}\left(\frac{1}{\theta}\right) r y^{r-1} e^{-y^{r} / \theta}, & \theta>0,\ y>0 \\ 0, & \text { elsewhere }\end{array}\right.$$ where $r$ is a known positive constant. a. Find a sufficient statistic for $\theta$. b. Find the MLE of $\theta$. c. Is the estimator in part (b) an MVUE for $\theta$?
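Before the transcripts, a hedged numerical check of part (b) may be useful. For this density the log-likelihood works out to $n\ln r - n\ln\theta + (r-1)\sum\ln y_i - \sum y_i^{r}/\theta$, which suggests the closed form $\hat{\theta} = \frac{1}{n}\sum Y_i^{r}$. The Python sketch below (function and variable names are my own, not from the text) simulates data and compares that closed form against a numerical maximizer.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)

def neg_log_likelihood(theta, y, r):
    """-log L(theta) for f(y|theta) = (r/theta) * y^(r-1) * exp(-y^r / theta)."""
    n = len(y)
    return (n * np.log(theta) - n * np.log(r)
            - (r - 1) * np.log(y).sum() + (y ** r).sum() / theta)

theta_true, r, n = 2.0, 3.0, 500
# If W = Y^r, then W is Exponential with mean theta, so Y = W^(1/r).
y = rng.exponential(theta_true, size=n) ** (1.0 / r)

theta_closed_form = (y ** r).mean()        # candidate MLE: (1/n) * sum(y_i^r)
result = minimize_scalar(neg_log_likelihood, bounds=(1e-6, 50.0),
                         args=(y, r), method="bounded")
print(theta_closed_form, result.x)         # the two should agree closely
```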
Hello. This is problem 9.83. We're going to do part (a), so we have a uniformly distributed random variable on $(0, 2\theta + 1)$, and our sample is made up of the $Y$'s. First we need to find the likelihood, since we're trying to find the MLE of $\theta$. The likelihood is $L(\theta) = \prod_{i=1}^{n} \frac{1}{2\theta + 1}\, I(0 \le y_i \le 2\theta + 1)$, and the important part right here is the brackets, the indicator $0 \le y_i \le 2\theta + 1$: it gives us the domain, not just for one $y$ but for all of the $y$'s. Collecting the product, $L(\theta) = \left(\frac{1}{2\theta + 1}\right)^{n} I\big(0 \le y_{(n)} \le 2\theta + 1\big)$, where $y_{(n)}$ is the maximum of the $y$'s, because the constraint only matters for the largest observation. Okay, so we're trying to maximize this. The factor $\left(\frac{1}{2\theta + 1}\right)^{n}$ gets bigger as $2\theta + 1$ gets smaller, so we want $\theta$ as small as possible, but we're not done yet: the indicator has to stay equal to one. We need to do a little algebra, because what we're maximizing over is $\theta$. If we take $y_{(n)} \le 2\theta + 1$ and subtract one and divide by two on both sides, we get $\frac{y_{(n)} - 1}{2} \le \theta$. So the smallest value $\theta$ can take is exactly that bound, and we can write down the MLE: $\hat{\theta} = \frac{Y_{(n)} - 1}{2}$. Okay, now for part (b) we need to find the MLE of the variance. We know that $V(Y) = \frac{1}{12}(2\theta + 1)^{2}$ by definition, and by the invariance property of MLEs the $\frac{1}{12}$ stays the same and instead of $\theta$ we plug in $\hat{\theta}$: $\frac{1}{12}\Big(2 \cdot \frac{Y_{(n)} - 1}{2} + 1\Big)^{2}$. Simplifying, we get $\frac{Y_{(n)}^{2}}{12}$.
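To double-check the algebra in this walkthrough, here is a small Python simulation (variable names are my own): it draws a sample from $\text{Uniform}(0, 2\theta + 1)$, computes $\hat{\theta} = (Y_{(n)} - 1)/2$ and the plugged-in variance estimate $Y_{(n)}^{2}/12$, and compares them with the true values.

```python
import numpy as np

rng = np.random.default_rng(2)

theta_true, n = 1.5, 2000
y = rng.uniform(0.0, 2 * theta_true + 1, size=n)   # sample from Uniform(0, 2*theta + 1)

y_max = y.max()                    # Y_(n), the largest order statistic
theta_mle = (y_max - 1) / 2        # MLE of theta from the walkthrough
var_mle = y_max ** 2 / 12          # MLE of V(Y), by the invariance property

print(theta_mle, theta_true)                          # close for large n
print(var_mle, (2 * theta_true + 1) ** 2 / 12)        # close for large n
```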
Now here, in this problem we would like to derive the expected value of $1/Y$ for a geometric probability distribution. For a geometric we know $P(Y = y) = q^{y-1} p$. And so this means that the expected value of $1/Y$ is $E\!\left(\frac{1}{Y}\right) = \sum_{y=1}^{\infty} \frac{1}{y}\, q^{y-1} p$. We can break apart that $q^{y-1}$ as $q^{y} q^{-1}$, and pull the $q^{-1}$ and the $p$ out of the sum, and so this is $\frac{p}{q} \sum_{y=1}^{\infty} \frac{q^{y}}{y}$. Now they give us a hint here, so let's make use of it: they told us that $\sum_{y=1}^{\infty} \frac{q^{y}}{y} = -\ln(1 - q)$. And so this gives us $E\!\left(\frac{1}{Y}\right) = -\frac{p}{q}\ln(1 - q)$, and since $1 - q = p$, that is $-\frac{p}{q}\ln p$.
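As a sanity check on the closed form $E(1/Y) = -\frac{p}{q}\ln p$, here is a short Python Monte Carlo comparison (my own sketch, not part of the solution):

```python
import numpy as np

rng = np.random.default_rng(3)

p = 0.3
q = 1 - p

closed_form = -(p / q) * np.log(p)          # E(1/Y) = -(p/q) * ln(p)

# numpy's geometric generator counts the number of trials up to and
# including the first success, i.e. Y takes values 1, 2, 3, ...
y = rng.geometric(p, size=1_000_000)
monte_carlo = np.mean(1.0 / y)

print(closed_form, monte_carlo)             # should agree to a few decimals
```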