JRC,
Good question about a challenging problem! You did a good job identifying that the point at issue is whether the survey data actually supports the author's conclusion once it is subjected to scrutiny. What makes this question difficult is how easily the test-writers can craft attractive wrong answers that tinker with the data without really affecting the argument, as you noted with Answer Choice (B).
To develop this point further, note that (as with most problems) there are two distinct approaches to getting this right, and they complement each other:
- Determine exactly how we could directly weaken the argument, and come up with a strong prephrase to this effect.
- Get a general sense of what the flaw is, and rule out incorrect answers that do not effectively exploit this flaw.
In your case, you took the second approach. That is a valid and strong way to reach the solution, especially when a problem is difficult. Let's work on two skills: spotting exactly what is wrong with the incorrect answers, and predicting the correct answer more precisely.
First, zero in on the distinction between the two groups surveyed. What is similar, and what is dissimilar? We don't know the baseline sample size of either group; we are only given a comparison of percentages. In other words, for each group the data tells us what percentage of respondents thought the treatment "made things a lot better."
Now, even if the only salient difference between these groups were how long they received treatment, there would still remain a number of ways to attack the validity of the conclusion. For instance:
- The sample size of one or both of these groups might be far too small.
- The percentage of people in the "over six months" group who found the treatment deleterious might far exceed the corresponding percentage in the "under six months" group.
- There could be some other flaw in the survey itself or in how it was conducted.
However, there is a much more powerful way, and one the LSAT is far more likely to use, to weaken an argument such as this:
- Demonstrate an inherent confounding distinction between these two groups.
In other words, this argument will quite likely turn not on some statistical artifact but rather on whether these groups are actually analogous to begin with. If we could show that the "under six month" group and the "over six month" group are qualitatively different, we would have a powerful and slightly less obvious way to attack this conclusion.
The above issue is precisely what happens with (B) and (C). Answer Choice (B) indicates that those who had received treatment for longer than six months were more likely to respond; however, that distinction is not, by itself, significant enough to weaken the argument very much. Unless the numbers are wildly disparate or one group is very small, the raw number of respondents does not undermine the usefulness of the comparison. (B) is an excellent trap answer because it invites you to bring in additional assumptions: for example, that the under-six-month group might have been very small. Well, it might have been, or it might not have been. Who knows?
That's where (C) shines. It presents a direct qualitative distinction between the two groups that straight-up harms the conclusion. The over-six-month group is a self-selected sample, likely in large part a subset of the under-six-month group: those who found therapy helpful and continued. Thus, if the under-six-month group had 1000 members, 200 of whom found therapy helpful, the over-six-month group might have consisted of only 560 people, with the same 200 still finding therapy helpful. We have shifted from 20% to roughly 36% without any increase in the number of people who find therapy helpful. Those who didn't find it helpful dropped out!
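If it helps to see that attrition effect in numbers, here is a minimal sketch. The figures are the hypothetical ones from the paragraph above, not data from the stimulus, and the "everyone who wasn't helped dropped out" story is just one possible scenario consistent with (C):

```python
# Illustrative only: hypothetical group sizes showing how self-selection
# (attrition of dissatisfied patients) can inflate the satisfaction rate
# without a single additional satisfied patient.

under_six_total = 1000   # hypothetical respondents treated under six months
helped = 200             # those who said it "made things a lot better"

# Suppose many who were not helped quit, while the 200 who were helped
# stayed past six months alongside 360 others.
over_six_total = 560

under_six_pct = 100 * helped / under_six_total
over_six_pct = 100 * helped / over_six_total

print(f"Under six months: {under_six_pct:.0f}% satisfied")  # 20%
print(f"Over six months:  {over_six_pct:.1f}% satisfied")   # ~35.7%
```

The jump comes entirely from the denominator shrinking, which is exactly the confound (C) exposes.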
For another problem that tests a similar concept, compare PrepTest 8, June 1993, LR Section 1, Question 13.