There are a few common argumentative strategies that defenders of science use in these kinds of cases.
One strategy is to blame hype. Someone has taken properly conducted, fallible, limited scientific findings and extended them beyond what the evidence actually indicates. Often the blame is placed on journalists (see confused's comments). But scientists themselves engage in hype. And evidence is ambiguous. It doesn't actually say anything on its own; it requires human interpretation and generalization. For instance, suppose we have a feeding study that used 60 lab rats and lasted 90 days. Do the results of this study tell us anything about humans and other mammals? Or do its results apply only to rats? Or, indeed, maybe its results apply only to these particular 60 rats. Because scientists can reasonably disagree about how far the evidence can be extended, it's often unclear where to draw the line between "responsible inference" and "hype."
Another strategy is to point to non-epistemic values. Epistemic values are factors, such as simplicity, that (we think) tend to lead us to true conclusions. Non-epistemic values are other factors that (we think) don't tend to lead us to true conclusions, such as a concern to protect human health or a desire to make a lot of money. One common view, the value-free ideal, holds that non-epistemic values have no legitimate role to play in evaluating hypotheses.
As a defense of science, we might say that non-epistemic values play a role in some cases, and this explains the few problem cases; but that, on the whole, scientists act according to the value-free ideal, so we should trust scientists.
Critics of science make a similar appeal to value-free science. But they might argue that non-epistemic values are widespread in science. You haven't told us much about your relatives. But, knowing the type, I suspect they might think non-epistemic values run rampant in "mainstream medicine." Namely, they might think that medical research and practice are dominated by the pharmaceutical industry, which wants to sell us lots of expensive drugs and treatments in order to make a lot of money. Because "mainstream medicine" is saturated with these profit-seeking non-epistemic values, it shouldn't be trusted.
You might respond that, in the case of nutritional studies, we're typically talking about "whole foods" (eggs, coffee, meat), not highly processed foods, much less pharmaceuticals. Often this research is sponsored by the relevant industry; the egg industry sponsored a lot of the research showing that eggs don't raise our cholesterol, for example. But this food-industry influence is separate from the pharmaceutical industry's influence on biomedical research.
That's not a very compelling response, though. I think a better response is to question the value-free ideal. Why think that non-epistemic values are necessarily bad for science?
One useful alternative to the value-free ideal is the argument from inductive risk. The framework was developed and promoted by Heather Douglas, especially in her book Science, Policy, and the Value-Free Ideal. The idea is that we should take the non-epistemic consequences of accepting or rejecting a hypothesis into account when we evaluate it. Consider breast cancer screening. A false negative result (there is cancer, but the test says there isn't) could lead to an avoidable death, while a false positive result (no cancer, but the test says there is) could lead to unnecessary surgery and chemotherapy. We should weigh which of these consequences is worse when we evaluate the results of the screening. According to the inductive risk framework, this is a legitimate way for non-epistemic values to influence scientific reasoning.
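The screening case can be put in simple decision-theoretic terms. Here's a toy sketch (all numbers and function names are my own invention, purely for illustration): when the two kinds of error have asymmetric costs, the strength of evidence needed to act on a hypothesis shifts.

```python
# Toy illustration of inductive risk: asymmetric error costs change
# how much evidence we need before accepting a hypothesis.
# All numbers are invented for illustration.

def act_on_hypothesis(p_true, cost_false_negative, cost_false_positive):
    """Accept and act on the hypothesis ("cancer present") when the
    expected cost of rejecting it exceeds the expected cost of accepting.
    Rejecting risks a false negative (probability p_true);
    accepting risks a false positive (probability 1 - p_true)."""
    expected_cost_of_rejecting = p_true * cost_false_negative
    expected_cost_of_accepting = (1 - p_true) * cost_false_positive
    return expected_cost_of_rejecting > expected_cost_of_accepting

# If a missed cancer is 10x worse than unnecessary treatment,
# even fairly weak evidence (20% credence) warrants acting:
print(act_on_hypothesis(0.2, cost_false_negative=10, cost_false_positive=1))
# With symmetric costs, 20% credence isn't enough:
print(act_on_hypothesis(0.2, cost_false_negative=1, cost_false_positive=1))
```

The point of the sketch is just that the cost ratio, a non-epistemic matter, legitimately moves the evidential threshold.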
Inductive risk can help us interpret nutritional research. Suppose an (imaginary) study indicates that ketchup increases the frequency and severity of migraines. If we accept this finding, we would probably avoid or stop eating ketchup. If you, like me, don't like ketchup, then this wouldn't be a big deal; I might just start avoiding ketchup a little more actively, just in case. Similarly, someone who's already prone to migraines and strongly wants to avoid getting more of them might avoid ketchup, even if they really like it. By contrast, if you LOVE ketchup and don't get migraines, you might reasonably reject this finding, or keep eating ketchup until we have more evidence.
In other words, the study's finding sets up a tradeoff between the pleasure of eating ketchup and the risk of migraines. Inductive risk says it's legitimate to interpret the evidence in light of where we stand on this tradeoff.
Now suppose the Ketchup Manufacturer's Association sponsors a study finding that ketchup does not increase the frequency and severity of migraines. From an inductive risk perspective, the Ketchup Manufacturer's Association probably has a very strong preference for people eating lots of ketchup, and probably doesn't care about migraines. But most people, I assume, would want more balance in their ketchup-migraine tradeoff. This means that most people would interpret the study's findings differently from the Ketchup Manufacturer's Association. Maybe the new evidence nudges us a little bit towards "ketchup is probably okay after all." But just a little bit; we might still try to reduce the amount of ketchup we eat.
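To make the tradeoff concrete, here's a toy sketch (invented numbers and names throughout) of how the same evidence, filtered through different value weightings, can yield different reasonable decisions:

```python
# Toy illustration: identical credence in "ketchup causes migraines,"
# different values, different reasonable conclusions.
# All numbers are invented for illustration.

def keep_eating_ketchup(credence_in_link, pleasure, migraine_cost):
    """Keep eating ketchup if the pleasure of eating it outweighs the
    expected migraine cost, given one's credence that ketchup really
    does cause migraines."""
    return pleasure > credence_in_link * migraine_cost

# Everyone has the same credence after reading the (imaginary) study:
credence = 0.3

# A ketchup lover who rarely gets migraines keeps eating it:
print(keep_eating_ketchup(credence, pleasure=8, migraine_cost=5))
# A migraine-prone person who only mildly likes ketchup gives it up:
print(keep_eating_ketchup(credence, pleasure=1, migraine_cost=20))
```

Note that neither person is being irrational here; they share the same evidence and differ only in how they weigh the tradeoff, which is exactly the kind of value-laden interpretation the inductive risk framework says is legitimate.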