@writerzt71 I got all of the same answers as you. For (d), I wasn’t sure what to do, so I just used SOCS to explain the distribution like I did on #1 (a).
Can someone please explain what they did for #3 (c) and (d)?
(d): Using the same method as above, calculate the conditional probabilities that, GIVEN at least one ATM is working (probability 0.85), one, two, or three ATMs are working. Those are your conditional probabilities. Since the question asks for an expected value, multiply each conditional probability by its respective number of ATMs, e.g. 3(0.2824). Add all of those values up and you’ll find that the expected value has increased given that at least one ATM is working.
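Here’s a rough Python sketch of that arithmetic. Only P(at least one working) = 0.85 and P(all three working) = 0.24 appear in this thread, so the probabilities for one and two working ATMs below are made-up placeholders just to show the calculation.

```python
# Sketch of the conditional expected value for #3 (d).
# p1 and p2 are HYPOTHETICAL placeholders; p3 = 0.24 and P(X >= 1) = 0.85 are from the thread.
p_at_least_one = 0.85
probs = {1: 0.30, 2: 0.31, 3: 0.24}  # placeholders chosen so they sum to 0.85

# Condition each probability on "at least one ATM is working"
cond = {x: p / p_at_least_one for x, p in probs.items()}
print(round(cond[3], 4))  # 0.24 / 0.85 = 0.2824, matching the answer below

# Conditional expected value: sum of x * P(X = x | X >= 1)
print(sum(x * p for x, p in cond.items()))  # larger than the unconditional E(X)
```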
I got this for #6 (d):
Method 1 describes an SRS, so our formulas for sampling distributions apply. Since the mean of the sampling distribution of the sample mean equals the population mean, 6 inches is the mean of the sampling distribution of the sample mean tortilla diameter. The standard deviation of the sampling distribution of the sample mean is sigma/sqrt(n), so here it is 0.11/sqrt(200) = 0.0078 inches.
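If you want to double-check the arithmetic, here’s a tiny Python version (the 0.11 and 200 are the problem’s numbers as quoted above):

```python
import math

sigma, n = 0.11, 200                 # population SD (inches) and SRS size from the problem
mean_xbar = 6.0                      # mean of sampling distribution = population mean
sd_xbar = sigma / math.sqrt(n)       # standard deviation of the sample mean
print(mean_xbar, round(sd_xbar, 4))  # 6.0 0.0078
```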
By the way, on #5, y=x is best for classification because of the 1:1 aspect ratio but LSRL is best for predictions, right?
@PhilipL 3c: 0.24/0.85=0.282
3d: Greater. There is no chance of zero ATMs working, so more probability is “allotted” to the rest (to keep all the probabilities adding to one). Therefore, the expected value increases.
For Number 6:
a. No
b. Method 1
c. Method 2
d. Approximately normal, mean 6 inches, standard deviation 0.11/sqrt(200)
e. Method 2
f. Method 1
Correct
For catullus101
Okay, who got a slight right skew on #1 for annual employee salary at corporation A?
Wait, for the one on ideal sample size, did you guys get like 276 people?
OK, these were my answers for #6:
A. No
B. Method 1
C. Method 2
D. Normal distribution with mean = 6, SD = 0.11/sqrt(200) = 0.0078
E. Method 1 - I said Method 1 would provide less variability because I looked at it from the perspective of standard deviations with bigger sample sizes. If Method 1 used 200,000 tortillas rather than the 100,000 used by Method 2, Method 1 would have a lower standard deviation than Method 2. I also incorporated the idea that with more values you get closer and closer to the true mean diameter of the tortillas, so I said Method 1 (I don’t know if this is correct).
F. Method 1 - Basically almost the same reasoning as above.
For #6 part (d), can I use the histogram from part (b) to describe the distribution obtained by Method 1?
@jasondinh1
Haha, didn’t really notice that. The standard deviation is just rounded down from 0.0078 to 0.0075; I think they would probably give you credit for that.
Also, what did you think about my decision for part E on the previous page?
I am not sure about your answer, basically because the distributions obtained by each method have different shapes (one is normal while the other is bimodal). In Method 2, because we only select one production line, the distribution has a mean of either 5.9 or 6.1. In Method 1, because we randomly choose 200 tortillas for our sample, we would expect about 100 from each of the production lines, so the distribution from this method would have a mean closer to 6.
But does it not make sense that with more data we will see less and less variability throughout the data?
The same thought process follows with trial and error: as we conduct more and more trials, we come closer to seeing what something actually is (in this case the true mean of the diameters of the shells), and the trials here are the number of tortillas sampled.
Let’s apply the concept to this problem in more depth.
Pretend Method 1 sampled 100 tortillas and Method 2 sampled 50.
If I made a distribution model for both of these, I would clearly be able to tell that Method 1 has less variability, since I have more data to compare against and there won’t be as many gaps or as much skew in the data; it would look more normal. For Method 2, I don’t have much data, and there is a possibility of gaps because there isn’t enough data to appropriately follow the normal model.
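Here’s a minimal sketch of that “bigger sample, less variability” point. It assumes a single normal population using the problem’s mean 6 and SD 0.11; the sample sizes 50 and 200 are just illustrative.

```python
import random, statistics

def sample_mean(n, mu=6.0, sigma=0.11):
    # mean of one simulated sample of n tortilla diameters
    return statistics.mean(random.gauss(mu, sigma) for _ in range(n))

small = [sample_mean(50) for _ in range(2000)]
large = [sample_mean(200) for _ in range(2000)]
print(statistics.stdev(small))  # roughly 0.11/sqrt(50)  ~ 0.016
print(statistics.stdev(large))  # roughly 0.11/sqrt(200) ~ 0.0078
```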
Using Method 2, you will get either a mean of 5.9 or 6.1
Using Method 1, you will be sampling from both production lines. You will oscillate between a mean of 5.9 and a mean of 6.1, which gives you an overall mean of 6.0.
Therefore, Method 1 homes in on a mean of 6.0 much better than does Method 2.
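If it helps, here’s a rough simulation of that comparison. The two production-line means of 5.9 and 6.1 inches are from the problem as described in this thread; the within-line spread (0.05 in.) is a made-up placeholder just so the sketch runs.

```python
import random, statistics

def method1_mean(n=200, sd=0.05):
    # SRS from the combined output: each tortilla comes from line A or B at random
    return statistics.mean(random.gauss(random.choice([5.9, 6.1]), sd) for _ in range(n))

def method2_mean(n=200, sd=0.05):
    # Pick one production line at random, then sample all n tortillas from it
    line = random.choice([5.9, 6.1])
    return statistics.mean(random.gauss(line, sd) for _ in range(n))

m1 = [method1_mean() for _ in range(2000)]
m2 = [method2_mean() for _ in range(2000)]
print(statistics.mean(m1), statistics.stdev(m1))  # centered near 6.0, small spread
print(statistics.mean(m2), statistics.stdev(m2))  # clusters at 5.9 and 6.1, spread near 0.1
```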
So are you agreeing with my answer on part E as Method 1 for problem #6?
It’s definitely right that the larger the sample, the less variability in the distribution (or, equivalently, the closer the sample mean is to the true mean). But what I’m trying to say is that the distributions from each method have different sample means. Method 1 gives us a sample mean that is between 5.9 and 6.1, and as we increase the sample size, this mean comes closer to 6. Meanwhile, Method 2 gives us a sample mean of roughly either 5.9 or 6.1, and as we increase the sample size, this mean comes closer to either 5.9 or 6.1.
So shouldn’t that mean Method 1 would be better in the long run, since it gets closer and closer to 6? With Method 2 we have a chance of getting 5.9 but also a chance of getting 6.1, which shows more variability, so technically it would be safer to use Method 1 to ensure the least amount of variability.
Just look at the first graph showing production lines A and B: with Method 2 it could be either of those, but with Method 1 we get a more accurate reading.
I think just comparing the sample sizes in this case is not convincing enough. We need to show which method produces a distribution whose sample mean is closer to 6.
Yeah, that’s why the answer should be Method 1.