This deserves funding. Prioritising what needs to be looked at is no minor feat; most people get lost in fairly useless anti-quackery.
As a chemist, I also recognise that medicine has more potential here: better evidence standards already exist than for chemicals (safety or effects), and the effects around saving lives are more measurable, immediate and large-scale (with a few exceptions such as the ozone layer).
To support your case even further, let's look at the biggest industries by market value (2021, Yahoo et al.) to see which areas of our lives could be disturbed most by fake studies.
Financial services: 22.5 trn
Construction: 12.5 trn
Commercial real estate: 9.6 trn
E-commerce: 9.09 trn
Health insurance: 8.45 trn
IT: 5 trn
Food: 5 trn
Oil & gas: 4.5 trn
Automotive: 3 trn
Telecommunications: 1.74 trn
Pharma: 1.6 trn
With the exception of food and pharma, none of these industries would be easily affected by unnoticed research misconduct, nor affect as many lives: they would see fairly quickly that a finding is wrong (provided they don't just believe what is claimed but actually check*), since no complex human organism is involved, results are relatively easy to see and measure, and they don't take long to unfold.
Oil & gas (with its environmental and health effects), as well as automotive and health insurance, could be the exceptions if conflicts of interest take the reins and they start financing fabricated studies (or manipulating testing software*) to escape their obligations and stay in business. It has happened before. My gut feeling about global natural-disaster statistics is that collapsing buildings aren't killing people on a genocidal scale.
So judging by this, and by the amount of money that flows into medical practice (health insurance), this prioritisation is of top relevance even in the off-chance that medicine is not the single top priority for identifying research misconduct (say, if some trendy, ubiquitous chemical kills us or does something hormonal to humanity, the ozone layer or the atmosphere before we find out, because we only monitor half an inch of grass on this planet).
One must also never forget the opportunity cost of lost progress. Research misconduct in virus and pandemic research is ultimately covered by this proposal; climate research isn't, but one crew can't do everything, nor can we drop everything and focus only on the climate, which could benefit from more resources for progress and some research integrity of its own. All in all, this pitch is superb; only the "how" is up for discussion:
If there are no immediate hints from anyone about which studies might be fake:
1) Wouldn't it be more efficient to start with systematic reviews, or with studies that made it into guidelines or quasi-guideline documents?
2) Wouldn't it then serve the institute's purpose to prioritise fraud checks on the studies and systematic reviews in guidelines that would affect the most people (deaths/QALYs) if their results were distorted by fraud?
3) Probably an open door: the linked cases involve studies by single authors, which suggests checking systematic reviews for robustness to fraud by filtering out one author at a time and seeing where the pooled results go (see the sketch after this list). This could help decide which studies should be investigated first.
4) Since cuckoo studies don't tend to be statistical outliers, how reliable are your methods at detecting them? Do they catch the full set of trials that have historically been found fraudulent or problematic? This sounds hard, because genuine but poorly run RCTs could be the one or two studies a result depends on, and checking them could require raw data, who knows what else, or even lengthy individual patient-level data reviews to spot odd diagnosis summaries and re-adjudications of diagnostic outcomes made until the statistics hide the side effects or make the results look good.
4.1) What I really want to ask is how your institute can hold its breath long enough to do that, since getting trial data takes years, via request, leak or court order, if it's possible at all. Or is this excluded from the institute's scope for now? That would be entirely reasonable as well.
4.nonsense) Give us a brief CV of your fraud detection work. I want to envy your skill and experience while browsing something like a Google Scholar profile devoted entirely to scientific fraud detection and the retractions of digital toilet paper it has triggered.
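A minimal sketch of what I mean in point 3 by "filtering out one author at a time": a leave-one-author-out recomputation of a fixed-effect meta-analysis. The author names and effect numbers are made up for illustration; this is not the institute's actual method, just the shape of the check.

```python
import numpy as np

# Hypothetical forest-plot data: (lead author, effect estimate, standard error).
studies = [
    ("Author A", -0.45, 0.10),
    ("Author A", -0.50, 0.12),
    ("Author B", -0.05, 0.09),
    ("Author C", -0.10, 0.11),
]

def pooled_effect(rows):
    """Inverse-variance (fixed-effect) pooled estimate."""
    effects = np.array([e for _, e, _ in rows])
    weights = 1.0 / np.array([se for _, _, se in rows]) ** 2
    return float(np.sum(weights * effects) / np.sum(weights))

overall = pooled_effect(studies)
print(f"Pooled effect, all studies: {overall:+.3f}")

# Drop each author's studies in turn and see how far the pooled result moves.
for author in sorted({a for a, _, _ in studies}):
    remaining = [row for row in studies if row[0] != author]
    shift = pooled_effect(remaining) - overall
    print(f"Without {author}: pooled effect shifts by {shift:+.3f}")
```

If dropping a single author flips or wipes out the conclusion, that author's trials are the natural place to start an investigation.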
Please go run around and bother some philanthropists until they give you the money for this institute. There are US tax returns on philanthropic activities, and studies that have checked who pays whom to promote science. Maybe you could phone everyone in that database.
First of all, thank you for writing this!
As to the solution, we have long been advocating the development and implementation of preventive measures that can be very effective.
For example, we approached three major publishing houses (from editor to VP level) and asked them to consider the following experiment. We suggested identifying a journal that receives a large number of submissions and that would agree to modify its Instructions for Authors by requesting authors' consent to a potential assessment of their laboratory notebooks if the manuscript is accepted and published. None of the publishers accepted this proposal, and the reason was not the cost of the assessments. Despite the low probability of a paper being subjected to an assessment (which could be 1 in 1,000), explicit reinforcement contingencies were thought to endanger the submission rate, which could put the publishing business model at risk.
This was not surprising, because feedback control, even under conditions of partial reinforcement, can be very powerful. One well-known example is tax systems: not every tax return gets audited, but the probability of being audited is nevertheless high enough to keep most (certainly not all) taxpayers law-abiding.
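To make that arithmetic concrete (the gain and penalty figures below are assumptions picked purely for illustration, not numbers from the publishers or from any tax authority):

```python
# Back-of-the-envelope deterrence arithmetic under an assumed 1-in-1,000 audit rate.
audit_probability = 1 / 1000    # the notebook-check rate suggested to the publishers
gain_if_undetected = 1.0        # benefit of a fabricated paper, arbitrary units (assumed)
penalty_if_caught = 2000.0      # retraction, funding bans, lost reputation (assumed)

expected_payoff = ((1 - audit_probability) * gain_if_undetected
                   - audit_probability * penalty_if_caught)
print(f"Expected payoff of fabricating: {expected_payoff:+.3f}")
# Comes out negative: with a large enough penalty, even a 0.1% chance of an
# audit makes fabrication a losing bet in expectation.
```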
First: it's so nice to be able to take in your writing in long form!
Regarding where to start: I agree that medicine is probably the strongest place to plant the vanguard's flag. Easiest and quickest way to show value, 💯
Regarding funding: the easiest money (?) is to make this kind of work interesting to insurance companies, be they private (US) or public (anywhere else).
It would be relatively simple to look at the highest-cost work being done and then comb through the associated literature. Finding even a few points of margin could have a radical impact.
Maybe do it through Nosek’s group? Idk.
Tricky. I can't predict insurance companies' appetite for such information, although I can probably find out.