Descriptive vs. Hypothesis-driven, part II

You people are writing my posts for me. On my recent post about descriptive vs. hypothesis-driven research, commenter Whimple picked up what I thought was a fairly dead thread… and said (among other things)…

“Is the difference between a descriptive hypothesis and a mechanistic hypothesis that the mechanistic hypothesis is falsifiable?”

Regular and thoughtful commenter DSKS replies…

A hypothesis must be falsifiable, period, whether broad and weakly focused (omics, “fishing expeditions” and so forth) or narrow and explicit. I think the latter is what is often referred to as “mechanistic” science, even though all science is ultimately directed towards establishing cause.

I think the current brouhaha about “descriptive science” conflates two separate issues best described by the following examples:

1) The serendipitous discovery: The investigator is testing her prediction that Treatment A will increase Variable X. During analysis, however, she discovers a trend indicating that Treatment A also appears to have changed Variable Y.

2) The fishing expedition: The investigator enters a fledgling research field regarding a new disease for which little is known of the cause. He proposes to do a genetic screen to see whether the disease is simply a result of a genetic defect. He finds that expression of genes X, Y and Z is deficient in all patients with the disease, but in no patients in the control group.

(1) is hypothesis-free and (2) is hypothesis-driven. Both are clearly hypothesis-generating. Under current standards, (1) cannot be published without at least designing new experiments to test the hypothesis that Treatment A affects Variable Y (only on this subsequent data can statistics be applied; assessments of probability post hoc are invalid). (2) satisfies the hypothetico-deductive model completely, and the issue determining publication will be the impact of the work and the reliability of the methodology.

Very clever, that. But here are the random things that bug me about these two types of hypotheses. In the case of #1, will the investigator notice the unexpected effect if it is not within their hypothesis? I’d love to think that we always do… but is the investigator in case #1 really likely to notice an unexpected effect OUTSIDE the proposed hypothesis? Can, and do, we bias ourselves by what we choose to look at during an experiment designed to test a narrow mechanistic hypothesis?

Second- still in regard to #1- how likely is it that the investigator does what DSKS proposes (design new experiments to test a new hypothesis)… and doesn’t just ‘adjust’ the original hypothesis and hammer out a paper? I’d love to think that’s not the way things work… but am I hopelessly naive?
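
To put a toy number on DSKS’s parenthetical that post hoc probability assessments are invalid, here is a minimal simulation sketch (the 20 secondary variables, the group sizes, and the 0.05 cutoff are all assumptions chosen purely for illustration). If the investigator simply ‘adjusts’ the hypothesis to whichever variable happened to move the most, that ‘finding’ clears p < 0.05 far more often than 5% of the time even when nothing real is going on; a fresh, pre-planned retest of that one variable comes back at roughly the nominal rate.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_sims, n_secondary, n = 2000, 20, 8   # assumed sizes, purely illustrative

post_hoc_hits = 0    # times the "noticed" variable looks significant in the original data
replications = 0     # times that same variable holds up in a fresh, pre-planned experiment

for _ in range(n_sims):
    # Secondary variables with NO true treatment effect.
    treated = rng.normal(size=(n_secondary, n))
    control = rng.normal(size=(n_secondary, n))
    p = stats.ttest_ind(treated, control, axis=1).pvalue

    best = int(np.argmin(p))             # the variable that "caught the eye"
    if p[best] < 0.05:
        post_hoc_hits += 1
        # New data collected specifically to re-test that one variable.
        retest = stats.ttest_ind(rng.normal(size=n), rng.normal(size=n)).pvalue
        if retest < 0.05:
            replications += 1

print(f"post hoc 'finding' rate: {post_hoc_hits / n_sims:.2f}")                # far above 0.05
print(f"replication rate:        {replications / max(post_hoc_hits, 1):.2f}")  # ~0.05
```

Which is DSKS’s point, as I read it: collect the new data first, then do the stats.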

And finally- on #2- as DSKS points out, screens ARE hypothesis-driven, just not the same flavor of hypothesis used in case #1… and maybe better… since it would be tough to bias the screen by having an idea of the outcome in your head in advance. Of course, screens can be screwed up too- and I’m not meaning to suggest that they are completely without bias… surely not… you get what you ask for in a screen.

10 thoughts on “Descriptive vs. Hypothesis-driven, part II”

  1. I’ve had trouble getting a straight answer to what “descriptive” means, and why it’s bad. Today I had an interesting discussion with a senior colleague at a different institution who suggested that “descriptive” is what you put in the comments to the grants in your pile that don’t survive your first ranking cut (which is to say, all of them except one). In other words, if the grant isn’t good enough, but you can’t clearly define why, you can just say the studies seem to be descriptive in nature. Could be code for anything: uninteresting, insufficiently justified, too speculative, whatever.

    That being said, I’ve been trying to define “descriptive” by trying to figure out what the opposite of “descriptive” is. One thought is that the opposite of “descriptive” is “explanatory” in this fashion:

    Explanatory (mechanistic?):
    1) make an observation (goes in preliminary data)
    2) construct a testable hypothesis (educated guess) that explains what the underlying biology is that causes the observation (the mechanism)
    3) design experiments that can provide evidence to either support or refute your explanation

    Descriptive:
    1) experiments that will let you make new observations, i.e.: generate preliminary data, no matter how fascinating those observations may prove to be

    I don’t think “hypothesis-driven” is the opposite of “descriptive” per se, because “hypothesis” is so vague as to be meaningless. For example, “we hypothesize that doing these experiments will reveal the ultimate secrets of the universe to us.”

    Unclear in the above description of “explanatory” is how you go about making these preliminary observations in the first place. It seems clear though that what you don’t do is try to convince the NIH study sections to fund your experiments to make these observations, because in practical terms the cash is going to people who have already made these preliminary observations. :/

  2. The whole thing bugs the bejeezus out of me because I have not yet done any research that was planned, per se. We stumbled on some tools, and they were useful, and we figured out a way to make them into a story. Then we collected some data looking for an expected difference between two groups, but instead got data showing that the two groups were identical–but vastly different from the accepted How Things Work. Another paper. Then we happened to have a tech notice a neat thing, and turned that into another paper.

    I don’t know how to classify any of these experiments, but “hypothesis-driven” is surely not it. It was a matter of having an attuned and open mind while playing around with a few things, most of which didn’t work.

  3. Whimple-

    In other words, if the grant isn’t good enough, but you can’t clearly define why, you can just say the studies seem to be descriptive in nature.

    If a reviewer can’t clearly define why a grant isn’t good enough, then maybe that is the reviewer’s problem, not the writer’s problem. Covering this up with ‘this work is descriptive in nature’ is a cop-out, a way of not admitting that something is good enough just because the reviewer can’t find anything concrete wrong with it. If a reviewer feels that a grant is uninteresting (… because), insufficiently justified (… because), or too speculative (… because), they should say so and give concrete examples of why they make these remarks. I’m sick and tired of all the ‘code’ bullshit. I mean DAMN- I want to do science, not spend my precious time trying to guess what is being hinted at by some reviewer who can’t think or write clearly on a certain subject. This little rant isn’t directed at you BTW, this subject just gets me going sometimes…

    Yeah- I agree with you that mechanistic vs. descriptive is better terminology than hypothesis-driven- but I get in a fix here:

    It seems clear though that what you don’t do is try to convince the NIH study sections to fund your experiments to make these observations, because in practical terms the cash is going to people who have already made these preliminary observations.

    Why do I have a problem here? Because sometimes pioneering a new area with new technology cannot be done on the cheap by sacrificing one’s salary and begging and borrowing here and there to scrape together a little $$ for the preliminary data… and even if you do get the thing 1/4 done on the cheap (i.e., you have the preliminary data), there is, to my knowledge, no mechanism to push such studies forward.

  4. Whimple said,
    “For example, “we hypothesize that doing these experiments will reveal the ultimate secrets of the universe to us.””

    This is a circular hypothesis that is not falsifiable (to know that all of the ultimate secrets had been revealed would require prior knowledge regarding the ultimate secrets).

    Dr. J said
    “Then we collected some data looking for an expected difference between two groups, but instead got data showing that the two groups were identical–but vastly different from the accepted How Things Work.”

    A prediction reveals an implicit (and falsifiable) hypothesis, if not an explicit one.

    “and maybe better… since it would be tough to bias the screen by having an idea of the outcome in your head in advance.”

    Well, the point of the hypothesis-driven method is to bias one’s attention towards the variable(s) under investigation, though. That’s how you can apply probability to them meaningfully, i.e. the chance that the variable you are examining will return a false positive is much smaller (and pre-definable) than the probability that at least one of a hundred other variables you’re not focusing on will give a false positive or negative.
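
    As a rough back-of-the-envelope sketch of that comparison (the 0.05 per-test risk and the count of one hundred variables are just assumptions for the arithmetic):

```python
alpha = 0.05          # per-test false-positive probability for the one focused variable
k = 100               # untargeted variables you are *not* focusing on

focused_risk = alpha                      # pre-definable: 5%
scan_risk = 1 - (1 - alpha) ** k          # chance of at least one spurious hit among the 100
expected_spurious = alpha * k             # expected number of spurious hits

print(f"focused test, false-positive risk:      {focused_risk:.2f}")
print(f"any false hit among {k} variables:      {scan_risk:.3f}")   # ~0.99
print(f"expected spurious hits in a blind scan: {expected_spurious:.0f}")
```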

    I see your point about being ‘open minded’ though, as it pays dividends to look at all of the information available from a piece of data regardless of whether it is directly related to the hypothesis. That’s how new observations are made and new avenues of investigation pursued.

    “Second- still in regard to #1- how likely is it that the investigator does what DSKS proposes (design new experiments to test a new hypothesis)… and doesn’t just ‘adjust’ the original hypothesis and hammer out a paper. I’d love to think that’s not the way things work… but am I hopelessly naive?”

    Scientifically this is improper, but sometimes pragmatism probably wins if a given experiment is simply too expensive or time-consuming to repeat. Some might argue that fudging a hypothesis to fit the data is just ‘good rucking’. Given that people are often wrong, and that science is inherently self-policing, there’s possibly a case to be made for that approach; if you’re wrong, somebody else will figure it out so no harm done (although, false information can end up being expensive and time-consuming for somebody else). All I would say is that resting a conclusion on such data puts you in the position of being more likely to be wrong than if you proceeded according to the established model. It’s up to the investigator to make the best call.

    One thing I’m finding a challenge is imagining a designed experiment without at least an implicit and falsifiable hypothesis. When one designs an experiment, surely one has a goal in mind, and surely there aren’t too many steps from that goal to a valid falsifiable hypothesis, no matter how focused or vague?

  5. I don’t know how to classify any of these experiments, but “hypothesis-driven” is surely not it.

    Exactly. The meaningful distinction that started this discussion isn’t between OMG Teh Scientific Method!! and the way research really works; it’s between getting to a story and just dumping a pile of data into a manuscript.

  6. In the case of #1, will the investigator notice the unexpected effect if it is not within their hypothesis? I’d love to think that we always do… but is the investigator in case #1 really likely to notice an unexpected effect OUTSIDE the proposed hypothesis? Can, and do, we bias ourselves by what we choose to look at during an experiment designed to test a narrow mechanistic hypothesis?

    Sure, it’s a bias, and no, the investigator is typically not likely to notice the unexpected effect. DSKS has belabored the point, but bias isn’t a bad thing, it’s just focus… that’s what hypotheses are, a focus. Furthermore, I don’t think it’s necessarily inappropriate to just shift the hypothesis to accommodate the new data. Isn’t that just how science works? We make predictions, test them, and then adjust based on the results and what we’ve learned along the way. There are unexpected results all the time, and we reform our hypotheses.

    Although, I have to add that predicting “a genetic basis to disease X, so we’ll do a microarray,” is not a hypothesis. I would argue that this is actually descriptive science, not hypothesis-driven research. It makes no prediction about the results to be obtained. Although, if the array has already been done, then the investigator has a good basis for subsequent hypotheses. Or, if the investigator can make some prediction about the class of genes that should be differentially regulated, then there may be a hypothesis buried in there somewhere.

    As for publishing descriptive science, it can be done but it needs to be really well designed. It is – in my opinion – more difficult to design these experiments in a meaningful and interesting way, but people like Pat Brown at Stanford do it all the time and he’s really good at it. Most structural biology (crystallography) could also be considered descriptive science.

  7. “Although, I have to add that predicting “a genetic basis to disease X, so we’ll do a microarray,” is not a hypothesis.”

    Why not? If the technique is capable of falsifying the hypothesis, “There is a genetic basis to disease X” then the hypothesis satisfies both criteria of falsifiability and experimental feasibility. Just because it’s vague doesn’t make it any less of a valid hypothesis.

    The hypothesis doesn’t have to be stated in black and white to be real (that’s pedantry); sometimes it is implicit in the design of the experiment. True descriptive science, if we are to take that to mean hypothesis-free, must surely be synonymous with pure serendipity, imho.

    “Most structural biology (crystallography) could also be considered descriptive science.”

    I would argue that a lot of structural biology is heavily hypothesis-driven; both implicitly and explicitly. As far as ion channels are concerned, crystallographic data is presented very much in terms of whether it supports or falsifies prior hypotheses based on functional and molecular biological data (and then the knives come out and much fun ensues).

  8. “Why not? If the technique is capable of falsifying the hypothesis, “There is a genetic basis to disease X” then the hypothesis satisfies both criteria of falsifiability and experimental feasibility. Just because it’s vague doesn’t make it any less of a valid hypothesis.”

    I guess it gets back to what we mean by “hypothesis” and “descriptive.” Practically, I think most people mean that the hypothesis is vague at best. The microarray experiment is a classic example of this common objection. The fact that genes may go up or down in one group isn’t necessarily informative by itself. Even if you run two identical tissue samples on a microarray, you’ll still get hits. Of course, we can use stats to tell us the degree of difference between the samples. My point is that the experiment in itself is not falsifiable unless one makes more detailed predictions about the expected results. That is to say, unless the investigator develops a more detailed hypothesis. That’s why a microarray is commonly regarded as “descriptive science,” or not “hypothesis driven.” That’s not to say that there isn’t some hypothesis buried in the experiment. I don’t disagree with you, given your definitions. I just don’t think that’s what is commonly meant by “hypothesis driven” and “descriptive science.”
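
    A quick null simulation makes the point concrete (the array size, replicate count and 0.05 cutoff are assumptions chosen purely for illustration): draw both groups from the identical distribution for every gene and you still get a few hundred nominal hits.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_genes, n_per_group = 10_000, 5          # assumed array size and replicate count

# "Identical" samples: both groups drawn from the same distribution for every gene.
group_a = rng.normal(size=(n_genes, n_per_group))
group_b = rng.normal(size=(n_genes, n_per_group))

p = stats.ttest_ind(group_a, group_b, axis=1).pvalue
hits = int(np.sum(p < 0.05))

print(f"{hits} of {n_genes} genes 'differ' at p < 0.05 "
      f"(~{0.05 * n_genes:.0f} expected by chance alone)")
```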

    Point taken about the structural biology. I agree that the majority of the publications are formed with a specific hypothesis in mind. However, in drug development, it is often done in a largely descriptive manner. We would like to know, say, the size and shape of an ATP pocket of an important kinase. There’s no firm hypothesis, but it is an essential experiment that will generate countless hypotheses.

    Could one experiment be more hypothesis driven than another? Maybe we should establish the “hypo-descriptive index” to fairly evaluate the degree to which a proposal answers a question or describes a phenomenon. 😉 Or, maybe it’s just more helpful to say that a hypothesis is “underdeveloped” or “highly speculative.”
