Statistical significance =/= clinical significance. I wish they taught this shit in stats.
"In this prospective randomized clinical trial that included 116 adults with overweight or obesity, time-restricted eating was associated with a modest decrease (1.17%) in weight that was not significantly different from the decrease in the control group (0.75%)."
Assuming a person was overweight at 200lbs (easy round figure to work with).
2.34lbs weight loss after 8 weeks with time-restricted eating.
Vs.
1.5lbs weight loss after 8 weeks in the control group.
This works out to 0.84lb difference, so basically a pound. Is it the biggest deal? No, not necessarily. Is it better? Yes. Clearly.
That's not really how statistics work though.
If 2 groups of 50 people are put on different diets and end up with a weight loss difference of 0.42%, or less than a pound, that does not necessarily mean that at the individual level you are more likely to lose an extra lb with method B than with method A. Some fluctuation is to be expected in any trial, even if both diets were identical. If the observed difference doesn't exceed that normal expected fluctuation by a considerable amount, then it's not statistically significant; that's what they mean.
Having the exact same weight loss in both trials would be a very unlikely result, even if you used the same protocol for both groups. A small difference is to be expected, one way or another. The question is if that difference lies outside of the scope of random factors and normal individual fluctuation.
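You can see this directly with a quick simulation. The numbers here are hypothetical (a true mean weight change of -1.0% with an individual SD of 2.5%; the study doesn't report SDs in the quote above), but the point holds for any plausible values: give two groups of 50 the exact same diet and a gap of 0.42% between group averages still shows up all the time by chance alone.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate many trials where BOTH groups of 50 follow the SAME diet.
# Hypothetical assumptions: true mean weight change -1.0%, individual SD 2.5%.
n_trials = 10_000
group_size = 50
diffs = []
for _ in range(n_trials):
    a = rng.normal(-1.0, 2.5, group_size)
    b = rng.normal(-1.0, 2.5, group_size)
    diffs.append(b.mean() - a.mean())
diffs = np.abs(diffs)

# How often does pure chance produce a gap of 0.42% or more
# between the two group averages?
frac = (diffs >= 0.42).mean()
print(f"{frac:.0%} of identical-diet trials show a gap of at least 0.42%")
```

With these assumed SDs, roughly 4 in 10 identical-diet trials produce a between-group gap as large as the one observed, which is exactly why a 0.42% difference on its own doesn't clear the bar for significance.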
E.g.: If you flip a coin 50 times, and then flip another coin 50 times, some variation between the results is expected even if both are "fair coins". Getting exactly 25H and 25T both times is relatively unlikely. But if the second coin's results deviate far outside the range that is highly probable, then you would suspect that something about that coin, beyond chance alone, is causing the variation. The likelihood of different deviations can be calculated, and for a result to carry statistical weight, it should fall outside the scope of expected random fluctuation and be an unlikely product of chance alone. How unlikely it needs to be before it's called "significant" depends on the researchers' methods.
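The coin probabilities above can be computed exactly with the binomial distribution. The 18-32 cutoff below is just an illustrative choice for "a big deviation", not anything from the study:

```python
from math import comb

def binom_pmf(k: int, n: int, p: float = 0.5) -> float:
    """Probability of exactly k successes in n independent trials."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

# Chance of exactly 25 heads in 50 fair flips:
p25 = binom_pmf(25, 50)
print(f"P(exactly 25H) = {p25:.3f}")    # 0.112

# Chance BOTH coins land exactly 25/25:
print(f"P(both 25/25) = {p25**2:.3f}")  # 0.013

# Chance a single fair coin lands outside 18..32 heads
# (an arbitrary threshold for "deviates a lot"):
tail = sum(binom_pmf(k, 50) for k in range(51) if k < 18 or k > 32)
print(f"P(outside 18-32 heads) = {tail:.3f}")
```

So a perfectly fair coin misses the exact 25/25 split almost 90% of the time, yet lands far from it (outside 18-32 heads) only a few percent of the time; significance testing is just formalizing where that line gets drawn.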
So the question is whether intervention B achieved a result that goes beyond the normal expected variation between 2 different groups of people, and when the study says "not statistically significant", it's because this is not the case according to the researchers' calculations. It doesn't mean that intervention B will necessarily give you an extra 0.42% all, or even most, of the time; that's precisely what the phrasing is warning you about.
Even from a clinical perspective, I don't really see how a difference of less than a lb over 8 weeks for an overweight person is particularly important, considering there are other factors that matter more, like long-term adherence.
Going beyond that, even if we could somehow establish that the average difference was due to a certain method, it wouldn't mean one should consider that method "better" either. If method A works better for 40% of the population and method B works better for 60%, then method B will produce better results on average in a trial. At the individual level, though, that still means it will often not be the better choice, so an average advantage isn't really enough to make a prescription at the individual level, or to call a method "better", without more info.
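The 40/60 scenario is easy to make concrete. With toy numbers (say each person loses 1 lb more on whichever method suits them), method B wins on average even though it's the worse pick for 4 in 10 people:

```python
# Hypothetical population: 60 people respond better to method B,
# 40 respond better to method A (each by 1 lb, a toy assumption).
population = ["B-responder"] * 60 + ["A-responder"] * 40

# Average gain from prescribing B to everyone:
# +1 lb for B-responders, -1 lb for A-responders.
avg_gain_B = sum(+1 if p == "B-responder" else -1 for p in population) / len(population)
print(avg_gain_B)  # 0.2 lb average advantage for B

# Fraction of individuals for whom B is the WORSE choice:
worse_off = sum(p == "A-responder" for p in population) / len(population)
print(worse_off)  # 0.4
```

A trial comparing the two would correctly report that B wins on average, and a doctor prescribing B to a random patient would still be handing 40% of them the inferior option.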