At least as far as your health is concerned, there is no difference. Politically and economically they are very different so you can decide there.
I'm posting this here because someone on letsrun.com innocently asked about the healthfulness of chocolate milk for recovery (which I think is a great source of carbs and protein), and the thread quickly devolved into HFCS bashing. Someone posted a link to the "proof" that HFCS is evil, or at least more evil than table sugar (sucrose). That link is to a press report of a paper that got a lot of attention earlier this year. I was moved enough to review the paper and write something for the letsrun message board. Here is my response; you may be interested.
The study referred to in this over-hyped press report is a prime example of how statistics in the hands of the ignorant create the anti-science hysteria (anti-AGW, anti-evolution, anti-evidence-based medicine) that is rampant on the internet. See also http://www.theatlantic.com/magazine/print/2010/11/lies-damned-lies-and-medical-science/8269
I will now use this paper in biostats 101 to test the ability of the students to find flaws in a published study. This will be an easy one.
Here are the results (end-point weight, in grams) from the first experiment:
1. HFCS 24 hour + chow = 470 ± 7 g
2. HFCS 12 hour + chow = 502 ± 11 g*
3. sucrose 12 hour + chow = 477 ± 9 g
4. chow only = 462 ± 12 g
First, why no sucrose 24 hour treatment?
Here is the take-home message from the authors and the press report:
1. The weight gain in the HFCS 12-hour treatment differs from that in the 12-hour sucrose treatment (no other differences were found). Unfortunately, the authors do not actually give us the weight gains, only the table above. From the table we get the curious result that the final weight of the HFCS 24-hour treatment is actually LESS than that of the sucrose treatment. If HFCS is so bad, why are the rats given 24-hour access to HFCS doing better than the sucrose rats? The authors also do not account for multiple tests (the type I error rate). Accounting for the type I error rate, the statistical significance of the 12-hour HFCS v. sucrose comparison disappears. Type I error rates are stats 101.
The authors did 2 other experiments, the "6 month" experiments, one on male rats and one on female rats. One of these didn't include a sucrose treatment, so we can ignore it (interestingly, the entire paper is about sucrose v. HFCS, so what is it even doing in the paper?). In the other, there is a reported difference between the 24-hour HFCS and sucrose treatments but not between the 12-hour HFCS and sucrose treatments (just the reverse of experiment 1). Again, this reported difference disappears when accounting for the type I error rate. What the authors failed to note at all is that the 12-hour HFCS weight gain was actually less than the 12-hour sucrose weight gain (of course this was not significant).
So what do the authors conclude in the discussion?
1. "In Experiment 1 (short-term study, 8 weeks), male rats with access to HFCS drank less total volume and ingested fewer calories in the form of HFCS (mean = 18.0 kcal) than the animals with identical access to a sucrose solution (mean = 27.3 kcal), but the HFCS rats nevertheless became overweight. In these males, both 24-h and 12-h access to HFCS led to increased body weight."
Ah, no. There was no reported difference in the 24-hour HFCS v. 12-hour sucrose comparison, and even the 12-hour HFCS v. 12-hour sucrose comparison is reported incorrectly. If you are going to make the claim that HFCS differs from sucrose, you have to explain why the HFCS 24-hour rats didn't differ.
"In Experiment 2 (long-term study, 6–7 months), HFCS caused an increase in body weight greater than that of sucrose in both male and female rats. This increase in body weight was accompanied by an increase in fat accrual and circulating levels of TG, showing that this increase in body weight is reflective of obesity."
Ah, no. The authors didn't even look at the 6-month effects of sucrose in male rats, so why do they make this claim? And there is no reported difference between the 12-hour HFCS and 12-hour sucrose treatments in females, so how can they claim a difference? At least in this experiment the 24-hour result would make sense, if it existed, which it doesn't.
There are numerous other smaller flaws that aren't worth bothering with given the major flaws in the design, the presentation, and the discussion. Was this paper even reviewed?
I posted this to LRC after someone asked the question:
thought i knew stats wrote:
Not doubting you, but would you mind elaborating what you mean by this? What are the "multiple tests", and how does the type I error rate compound?
That is a really good question. I left it out to keep the post from being too long. A type I error is a "false positive": basically, the test tells you there is a difference when in fact none exists.
In experiment one there were 4 treatments, so there are 4 × 3/2 = 6 ways to compare the different pairwise combinations (for example, HFCS 12 hour v. HFCS 24 hour is one pairwise comparison). We don't have to compare all of these, but in this case all are of interest. So we have "multiple" tests (in this case, 6). Whenever we have more than 1 test, the chance of finding a false positive (type I error) goes up (that is, the chance of finding something improbable* goes up if you go looking multiple times). In the case of 6 tests, our chance of finding at least one type I error goes from 5% per test (if that is what we want) to about 26% (1 − 0.95^6 ≈ 0.265). So there are very, very well known methods to control for this. Really, it is stats 101, and there are too many papers in the literature admonishing researchers when they don't deal with it. Psychology departments are known for rigorous statistics, and any psychology professor at Princeton will be well versed in this. I will claim here that the last author is willfully ignoring it.
The paper actually has three experiments and a total of 6 + 6 + 3 = 15 pairwise tests that are all testing basically the same thing, so I would go even further and claim that they should be accounting for all 15 tests (not just 6), especially given the extraordinariness of the claim (see below). The probability of making at least one type I error with 15 tests is now 54%.
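The error-rate arithmetic above is easy to check with a few lines of Python (a sketch, assuming independent tests and the conventional 5% per-test alpha; the function names are mine, not from any statistics package):

```python
# Family-wise error rate: probability of at least one false positive
# across k independent tests, each run at a per-test alpha.
def familywise_error_rate(k, alpha=0.05):
    return 1 - (1 - alpha) ** k

# Bonferroni correction: shrink the per-test alpha so the family-wise
# error rate stays near the nominal 5%.
def bonferroni_alpha(k, alpha=0.05):
    return alpha / k

print(round(familywise_error_rate(6), 3))   # ~0.265 for the 6 pairwise tests
print(round(familywise_error_rate(15), 3))  # ~0.537 for all 15 tests
print(round(bonferroni_alpha(6), 4))        # each test must clear p < 0.0083
```

Bonferroni is the bluntest of the "very, very well known methods"; less conservative procedures (Holm, Tukey's HSD for pairwise means) exist, but any of them would wipe out a marginal p-value spread across 6 or 15 comparisons.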
Extraordinary claims require extraordinary evidence. The authors are making the claim that 0.45 glucose + 0.55 fructose does not equal 0.5 glucose + 0.5 fructose. This would come as a surprise to most physiologists. It's close enough to an extraordinary claim that most physiologists would require extraordinary evidence to be convinced.
* the chance of finding something improbable. The probability of being dealt four red cards is 0.5 × 0.5 × 0.5 × 0.5 = 6.25%, which is not very probable. But if you dealt yourself 4 cards 10 times in a row, the probability of at least one of these "hands of 4" being all red would be much higher than 6.25%. It's why we can win at solitaire. Sometimes.
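The same one-minus-the-complement calculation covers the card example (using the footnote's 0.5^4 = 6.25% per-deal figure, which treats each card as an independent 50/50 draw rather than a deal from a real deck):

```python
# Chance that a single four-card deal comes up all one specific color,
# treating each card as an independent 50/50 draw (the footnote's simplification).
p_hand = 0.5 ** 4                        # 0.0625, i.e. 6.25%

# Chance of seeing at least one such hand across 10 independent deals:
# 1 minus the probability that all 10 deals miss.
p_at_least_one = 1 - (1 - p_hand) ** 10
print(round(p_at_least_one, 3))          # ~0.476, far above 6.25%
```

This is exactly the structure of the multiple-testing problem: a 6.25%-improbable event becomes close to a coin flip once you give yourself 10 chances to find it.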