by Mike Critelli
At MakeUsWell, we strive to educate users on how to improve their ability to self-manage their health and more intelligently partner with health professionals. We emphasize partnering with knowledgeable professionals, not replacing them.
Artificial intelligence can help us manage our health when we assess food additives, over-the-counter medications, supplements, and adverse interactions among them. Inevitably, we must address how to evaluate the underlying research.
Identifying peer-reviewed research from authoritative sources in reputable journals is not enough. Research can fail at several levels, even with elaborate peer review processes.
We believe AI can help to detect many potential points of failure.
Publication of a research study by those who conducted it is as close to a definitive source of insight as possible, especially when the researchers have a track record of excellence in their chosen fields. But even in these cases, the research's value needs to be assessed against the following criteria:
- Is the population of the study representative?
- Is the study current?
- Has it been replicated?
- What controls were used to ensure that the study was properly done?
- Is there a clear disclosure about any potential financial conflicts of interest, such as the payment source for the research?
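The criteria above can be captured as a structured checklist. The sketch below is a hypothetical illustration, assuming a simple pass/fail appraisal; the field names and scoring are ours, not a validated appraisal instrument.

```python
from dataclasses import dataclass

# Hypothetical sketch: encoding the five appraisal criteria as a
# structured record. A real platform would weight and calibrate these,
# not treat them as simple booleans.

@dataclass
class StudyAppraisal:
    representative_population: bool
    current: bool
    replicated: bool
    adequate_controls: bool
    conflicts_disclosed: bool

    def unmet_criteria(self) -> list[str]:
        """Return the names of criteria the study fails to meet."""
        return [name for name, met in vars(self).items() if not met]

appraisal = StudyAppraisal(
    representative_population=True,
    current=True,
    replicated=False,           # not yet independently replicated
    adequate_controls=True,
    conflicts_disclosed=False,  # funding source not reported
)
print(appraisal.unmet_criteria())  # → ['replicated', 'conflicts_disclosed']
```

A checklist like this makes the gaps explicit: the study is not rejected outright, but the unmet criteria travel with it as caveats.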
Scientific research is not static. What we believe to be true today can be challenged by later research that is inconsistent with it in important respects. Those inconsistencies must be resolved.
Randomized clinical trials, the historic gold standard for research, isolate one or a small number of variables against a control group. The FDA requires proponents of a new drug to demonstrate that it is a better choice than the best alternative available without it.
But most patients have multiple influences on their health, including genetics, epigenetics, multiple chronic diseases, and other factors that cannot possibly be encompassed in even a large number of clinical studies. When the FDA approves a drug or device, it concludes that it is better than the control therapy for a particular population, but its ultimate effectiveness is knowable only through marketplace experience and what the FDA calls post-marketing surveillance.
Even when reputably sourced studies are published in journals with rigorous peer review processes, frauds still occur, and we do not necessarily know how often. These frauds are extremely difficult to detect, though an AI-driven platform can make detection easier.
Some examples of fraud indicators in clinical trials or studies are:
- Exceptionally high recruitment success rates and unusually low dropout rates among trial or study subjects.
- The absence of reported adverse events. Adverse events always occur.
- Identical lab results for a highly diverse set of participants in a study.
- Clinical visits that occur at improbable times.
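The indicators above lend themselves to automated screening. The following is a minimal sketch under stated assumptions: the thresholds are illustrative placeholders of our own choosing, and a real platform would calibrate them against comparable trials rather than hard-code them.

```python
# Hypothetical sketch: screening trial-level summary data for the four
# fraud indicators listed above. All thresholds are illustrative
# assumptions, not validated cutoffs.

def fraud_indicators(trial: dict) -> list[str]:
    flags = []
    if trial["retention_rate"] > 0.98:        # implausibly low dropout
        flags.append("unusually low dropout")
    if trial["adverse_events"] == 0:          # adverse events always occur
        flags.append("no adverse events reported")
    labs = trial["lab_results"]
    if len(set(labs)) < 0.5 * len(labs):      # many identical values
        flags.append("duplicated lab results")
    if any(h < 6 or h > 22 for h in trial["visit_hours"]):
        flags.append("visits at improbable times")
    return flags

suspect = {
    "retention_rate": 0.99,
    "adverse_events": 0,
    "lab_results": [5.1, 5.1, 5.1, 5.1, 6.0, 6.0],
    "visit_hours": [3, 10, 14],  # a 3 a.m. clinic visit
}
print(fraud_indicators(suspect))  # flags all four indicators
```

No single flag proves fraud; the value of a screen like this is in surfacing patterns worth a human investigator's attention.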
Many research articles, known as "reviews" or "literature reviews," are merely summaries of other research. The challenge in evaluating this type of article is to ensure it cites the primary source material correctly, both representing the content of each source accurately and identifying the source precisely.
Recently, we saw the retraction of a 2022 research article published by Hindawi, a subsidiary John Wiley & Sons acquired in 2021, entitled "Toxicological and Teratogenic Effect of Various Food Additives." It is worth quoting the Wiley retraction notice to understand how flawed research can get published: "This article has been retracted by Hindawi following an investigation undertaken by the publisher [1]. This investigation has uncovered evidence of …the following indicators of systematic manipulation of the publication process:
- Discrepancies in scope.
- Discrepancies in the description of the research reported.
- Discrepancies between the availability of data and the research described.
- Inappropriate citations.
- Incoherent, meaningless and/or irrelevant content included in the article.
- Manipulated or compromised peer review."
How can AI provide useful guidance?
- AI is on safer, although not completely safe, ground when it cites authoritative regulatory bodies, such as the FDA or the European Food Safety Authority. And when these organizations differ in their regulatory standards and conclusions, the AI platform can clearly spell out the differences.
- Does the AI platform contain checks and balances to ensure that the research is still correct? Many AI large language models are current only through a training cutoff date. The 2024 Wiley retraction of a 2022 research article does not automatically surface in every large language model. A well-designed platform accounts for such LLM limitations.
- AI should remind users that the scientific data on which it relies can change as new research emerges. Science keeps moving forward. It is never permanently “settled.”
- The guidance offered becomes most useful when users recognize that they are not being fed a series of definitive answers, but are engaging with a guide. Such AI-driven guidance encourages them both to think for themselves and to consult individuals they trust as having deep domain knowledge.
We hope users will become acutely aware that all chemical or biological medications and supplements have potentially unpredictable consequences, especially when used in abnormally high quantities or frequencies.
Our AI platform will help users along this journey, both alerting them to the risks of relying on unaided or flawed research of their own and improving their ability to self-manage their health and wellbeing.