Thursday 19 December 2013

DH and Hunt are disingenuously spinning data


The NHS is in crisis because the current government has wasted billions on yet more corrupt market-based reforms, which will enrich private firms at the expense of the taxpayer.  Sadly those in control, Jeremy Hunt and the Department of Health, don't want the truth to be revealed and are frantically trying to spin reality in their favour.  For example, yesterday Hunt said on Twitter:

"This yr gave NHS more financial help than ever b4 -for 2900 more staff"

NHS spending is barely increasing and is rising at a rate well below general inflation; in real terms, therefore, NHS funding is being cut year on year.  This is having the inevitable knock-on effect on services: local providers are cutting services left, right and centre as a result.

The talk of 'increasing staffing levels' was echoed by another voice from the DH propaganda ministry, Tim Jones, the head of news at the DH:

"Oh, and er, 4,400 more clinicians since 2010"

Hunt, Jones and the DH are engaged in a disingenuous game of tricking the public and misrepresenting the reality of what is going on in the NHS on the front line.  It is easy to cherry-pick time points to pretend staffing is increasing, as Hunt and Jones did yesterday, but the above graph shows that this is extremely misleading.  Staffing levels have been static in recent years, as evidenced above, and trying to pretend anything else is simply shameless political spin.  Here is a graph of the net change in staffing levels, showing that claims of a recent increase are highly disingenuous:


So here we have a situation whereby those in control are proving they cannot be trusted.  The reality on the ground shows a crisis in emergency departments, staffing shortages and numerous service cuts.  Hunt and the DH cannot be trusted on the NHS, further proof that this government is on its last legs.

Friday 8 November 2013

Department of Health has no record of Brian Jarman's meeting with Jeremy Hunt


 
Firstly, I do not agree with Brian Jarman on many things, but I have the utmost respect for his integrity as a researcher and a human being.  Therefore I have no doubt that when he says the following, he is speaking the truth:
 
"I met Jeremy Hunt on the afternoon of Monday 15th July for about 40 minutes I'd estimate."
 
It is therefore very strange that when I emailed the Department of Health and asked them for their records of the above meeting under the Freedom of Information Act, they replied:
 
"The Department of Health does not hold information relevant to your request."
 
This is strange.  Surely Jeremy Hunt should be recording what went on at such meetings, given that he is an elected MP and the Secretary of State for Health?


 

Tuesday 29 October 2013

Why does my shoulder hurt?

The aim of this piece is to summarise my understanding of why things hurt and, specifically, why the shoulder hurts. Obviously this is one monstrously huge topic and it is beyond the scope of a short blog piece to describe everything in minute detail. It is important to bear in mind that this is just one person's opinion, and although I have tried to keep things as evidence based as possible, there are numerous huge gaps in our knowledge; it is therefore perfectly normal to become confused the more one delves into these great unknown areas. I hope this piece does confuse you, as that confusion has the potential to open your mind to appreciating these big unknowns; perversely, seeing everything in black and white is probably a sign of not really understanding the topic.

 
Pain and its importance

Firstly pain is a very subjective entity and can be defined as:

“An unpleasant sensory and emotional experience associated with actual or potential tissue damage, or described in terms of such damage.”

This definition conveys just how important both central and peripheral systems are in pain processing and pain development. It is vital to remember that pain is frequently a totally necessary and useful sensation. Pain after an acute injury reminds us to be careful and forces us to titrate our activity levels to a level that can be tolerated by the healing tissue. Interestingly, after an acute injury such as blunt trauma or a burn, the normal pain response involves an increase in pain sensitivity in both peripheral (peripheral sensitisation) and central (central sensitisation) pain processing (nociceptive) structures. The artificial division between peripheral and central is drawn at the level of the spinal cord, with sensitivity at the cord and above deemed ‘central’, while sensitivity peripheral to the cord is deemed ‘peripheral’.

Our nociceptive systems are not just simple afferent sensors of tissue damage, as once thought; higher systems such as emotion and mood can have powerful downward effects on the peripheral nervous system via descending modulatory circuitry. This descending circuitry consists of both direct neural routes and indirect endocrine routes such as the hypothalamic-pituitary axis. These descending systems may affect not only pain perception but also peripheral tissue homeostasis and healing. The peripheral nervous system’s important afferent role in tissue homeostasis and healing has been relatively under-researched and is consequently rather poorly understood, but it is well known that denervation is highly detrimental to the healing of both ligament and bone (http://www.ncbi.nlm.nih.gov/pubmed/12382964).

Pain may occur in the presence or absence of tissue damage; consequently there is huge variability in the patterns of peripheral and central changes in patients presenting with shoulder pain. Some patients have a significant tissue abnormality, for example a rotator cuff tear, with a very peripheral pattern of pain sensitivity, while others have no significant tissue abnormality and a very central pattern of pain sensitivity (http://www.ncbi.nlm.nih.gov/pubmed/21464489). It is also worth noting that many patients have significant tissue abnormalities without any pain or functional symptoms at all (http://www.ncbi.nlm.nih.gov/pubmed/10471998).

Pain and structure

Despite the high number of asymptomatic patients with rotator cuff tears, there is still a very strong relationship between pain symptoms and rotator cuff integrity. Rotator cuff tendinopathy (RCT) is the commonest ‘cause’ of shoulder pain and accounts for about three quarters of presenting patients. Patients with rotator cuff tears are far more likely to have symptoms than those without (unpublished work by Oag et al), while previously asymptomatic patients are far more likely to develop symptoms as their tear size increases (http://www.ncbi.nlm.nih.gov/pubmed/16882890). It is unlikely that any one specific factor which precisely determines symptom development will ever be identified; pain is complex and many factors may contribute to its development. Glenohumeral joint kinematics change as tear size increases, but this abnormality is equally prevalent in symptomatic and asymptomatic patients (http://www.ncbi.nlm.nih.gov/pubmed/10717855, http://www.ncbi.nlm.nih.gov/pubmed/21084574).

Although the rotator cuff tendons are the most common ‘source’ of shoulder pain, damage to any structure may result in pain development; other common sources are the glenohumeral joint and capsule, the labrum, the acromioclavicular joint and the cervical spine. It can be incredibly difficult, and sometimes impossible, to pinpoint the exact ‘source’ of pain using history, examination and radiological investigations, for a wide variety of complex reasons. The more highly ‘centrally sensitised’ a patient is, the more difficult it can be to reach a specific diagnosis; this may be demonstrated by clinical features such as pain radiating all the way down the arm and tenderness to palpation over a wide area. It is important to realise that highly centrally sensitised patients may still have a very treatable peripheral pathology in isolation, but it is also vital to think of nerve entrapment in the cervical spine as part of one’s differential diagnosis.

History, examination findings including special tests, and diagnostic injections (bursal injections or nerve root blocks) are all undoubtedly useful, but they can all be problematic due to a lack of specificity; for example, a C6 nerve root block may abolish a patient’s ‘cuff tear related’ shoulder pain simply because C6 innervates the shoulder joint. Both rotator cuff tears and cervical spine degeneration become increasingly common with age, meaning that many patients may well have dual pathology, and sometimes it is only after a failed surgical intervention that a significant contributing pathology is recognised to have been missed. Tissue abnormality can only explain so much in terms of symptomatology, and the fact that patients with very similar tissue abnormalities can have such different levels of symptoms is likely explained by individual variations in nociceptive processing, both peripherally and centrally.
Time and the placebo are great healers

The treatment of shoulder pain depends on a great number of factors, including the patient’s specific diagnosis, the patient’s preferences and the treating clinician’s choice of strategy. As a natural cynic I have become increasingly convinced that many widely used treatments have little efficacy beyond the placebo effect. The placebo effect is incredibly powerful and useful in augmenting any particular treatment strategy; the patient’s belief and expectation can be huge drivers of symptomatic improvement, whether or not a treatment has a significant ‘treatment effect’.

The body has a tremendous capacity for dampening down pain over time, and amazingly this process is hardly understood at all. Huge numbers of patients with significant symptoms simply get better without any intervention, despite there being no improvement, or even a significant deterioration, in their structural tissue abnormality; many therefore never consult their primary care clinician or a secondary care specialist. Many patients do not ‘get better’ but are reasonably content to live with a certain level of pain and disability; after all, this is part of what normal ageing entails. We have to be realistic about what can be achieved, and setting real-world patient expectations is an important part of any consultation.

It is certainly beyond this short piece to bore you with all the evidence for all the possible treatments for all the possible diagnoses of shoulder pain. What is of value is remembering that every individual is different, with a unique pattern of peripheral tissue changes combined with highly variable peripheral and central nervous system changes, and that these drivers of pain symptomatology are well worth considering when embarking upon any particular treatment plan. Everything we do is imperfect: diagnosis is fraught with problems of sensitivity and specificity, and we have no cures for the age-related disorders that cause so much pain and disability in those they affect. First we should try to do no harm, and second we should try to appreciate that pain is a very complicated and confusing entity.

Further reading

A summary of the key mechanisms involved in shoulder pain can be found here (http://www.ncbi.nlm.nih.gov/pubmed/23429268) at the British Journal of Sports Medicine.

Friday 20 September 2013

Professor Jarman's reliance on the c-statistic

There is so much more to this UK/US HSMR comparison than first meets the eye. At first glance there were some significant flaws and assumptions, but the more digging I have done, the more gaps I seem to be finding in certain people's robust conclusions.  Many thanks to Prof Jarman for the links and data he has provided; it is well worthwhile reading up on the methodology behind HSMRs in the first instance.  One thing it is vital to understand is the so-called 'c-statistic', which is a basic measure of how well a model fits the mortality data for a particular group of patients:

"The success of case-mix adjustment for accurately predicting the outcome (discrimination) was evaluated using the area under the receiver operating characteristic curve (c statistic). The c statistic is the probability of assigning a greater risk of death to a randomly selected patient who died compared with a randomly selected patient who survived. A value of 0.5 suggests that the model is no better than random chance in predicting death. A value of 1.0 suggests perfect discrimination. In general, values less than 0.7 are considered to show poor discrimination, values of 0.7-0.8 can be described as reasonable and values above 0.8 suggest good discrimination."

and

"As a rank-order statistic, it is insensitive to systematic errors in calibration"

The second quote is particularly salient, as one of the key flaws in Professor Jarman's US/UK comparison may be that there were systematic differences between the admissions policies and coding in the two countries; this would not be detected by the c-statistic.
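
To make the quoted definition concrete, here is a minimal sketch (in Python, using made-up illustrative risks rather than any real Dr Foster model output) of how a c-statistic can be computed directly from its definition: the probability that a randomly selected patient who died was assigned a higher predicted risk than a randomly selected patient who survived.

```python
import itertools

# Illustrative predicted risks of death from a hypothetical case-mix model,
# paired with the observed outcome (1 = died, 0 = survived).
predicted_risk = [0.02, 0.10, 0.35, 0.08, 0.60, 0.15]
died           = [0,    1,    0,    0,    1,    0]

def c_statistic(risks, outcomes):
    """Concordance probability: P(risk of a random death > risk of a random survivor).
    Ties count as a half, matching the usual area-under-the-ROC-curve definition."""
    deaths    = [r for r, d in zip(risks, outcomes) if d == 1]
    survivors = [r for r, d in zip(risks, outcomes) if d == 0]
    concordant = 0.0
    for rd, rs in itertools.product(deaths, survivors):
        if rd > rs:
            concordant += 1.0
        elif rd == rs:
            concordant += 0.5
    return concordant / (len(deaths) * len(survivors))

print(c_statistic(predicted_risk, died))  # 0.75 for this toy data
```

Note that because only the rank order of the risks matters, multiplying every predicted risk by the same factor (a gross calibration error) leaves the c-statistic untouched, which is exactly the point made in the second quote.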

It is then interesting to look at Dr Foster's data concerning the c-statistics they have obtained for the clinical conditions that they use to determine the HSMRs of UK hospitals.  Dr Foster routinely uses 56 diagnostic groups (contributing towards about 83% of UK deaths), defined according to ICD codes.  Of note, much of Prof Jarman's UK/US comparison only used 9 diagnostic codes, which is a little strange in itself: why not use all 56 diagnostic groups?  These 9 codes covered less than half of the deaths in both countries.  I have listed the codes used and their individual c-statistics based on Dr Foster's 2012 report:

Septicaemia - 0.792 (reasonable)
Acute MI - 0.759 (reasonable)
Acute heart failure - 0.679 (poor)
Acute cerebrovascular event - 0.729 (reasonable)
Pneumonia - 0.838 (good)
COPD/Bronchiectasis - 0.714 (low end of reasonable)
Aspiration pneumonitis - 0.711 (low end of reasonable)
Fractured Hip - 0.756 (reasonable)
Respiratory failure - 0.745 (reasonable)

These c-statistics are not very impressive; it must be remembered that 0.5 is effectively zero, and many of these values are at the low end of reasonable.  It is interesting that Professor Jarman quotes the overall c-statistic for his 9-code model as 0.921.  Given that he individually compared each HSMR for each code, surely he should be giving the individual c-statistics for each country's subgroup for each specific code?  Professor Jarman has not provided this data; it would certainly be interesting to see the c-statistics for the UK and the US for each code, to see whether there is a relationship between c-statistic disparity and HSMR disparity.
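
One plausible explanation, and this is my own conjecture illustrated with simulated data rather than anything taken from Prof Jarman's model, is that pooling diagnosis groups with very different baseline mortality inflates the overall c-statistic: the model gets credit for telling a low-risk condition apart from a high-risk one, even if it discriminates only modestly within each.  A rough sketch:

```python
import math
import random

random.seed(0)

def simulate_group(n, base_logit, signal=0.8):
    """One diagnosis group: predicted risk = logistic(base_logit + signal * severity),
    with a modest within-group severity signal. Returns (predicted risk, died) pairs."""
    data = []
    for _ in range(n):
        x = random.gauss(0, 1)                        # patient-level severity marker
        p = 1 / (1 + math.exp(-(base_logit + signal * x)))
        data.append((p, 1 if random.random() < p else 0))
    return data

def c_stat(data):
    deaths    = [p for p, d in data if d == 1]
    survivors = [p for p, d in data if d == 0]
    conc = sum(1.0 if pd > ps else 0.5 if pd == ps else 0.0
               for pd in deaths for ps in survivors)
    return conc / (len(deaths) * len(survivors))

low_risk  = simulate_group(2000, base_logit=-4.0)   # a low-mortality condition (roughly 2%)
high_risk = simulate_group(2000, base_logit=-0.5)   # a high-mortality condition (roughly 40%)

print(round(c_stat(low_risk), 2))                   # modest within-group discrimination
print(round(c_stat(high_risk), 2))                  # modest again
print(round(c_stat(low_risk + high_risk), 2))       # pooled c-statistic is considerably higher
```

If something like this is going on, a headline figure of 0.921 tells us little about how well the model discriminates within any single diagnosis group, which is precisely why the per-code, per-country c-statistics matter.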

It is also interesting that 75 of 247 of Prof Jarman's mortality models failed their statistical goodness-of-fit tests.  The measure of how well the models generally fit is also pretty poor (mean R-squared of 0.25).  It must also be reiterated that the c-statistic will not pick up errors in calibration, so if one country is systematically up-coded relative to another, the c-statistic will not detect this.  The one key question I would like to see answered is just how Professor Jarman selected these 9 codes for the US/UK comparison.  There are other questions too, such as which models failed the goodness-of-fit tests and whether the codes assessed had reasonable R-squared values.  There is so much beneath the surface here; I am convinced this story will run and run.

Thursday 19 September 2013

HSMRs, coding, big assumptions and implausible conclusions....


The second major flaw in Prof Brian Jarman's UK/US mortality comparison is that it assumes HSMRs are reliable and that coding is equivalent in the UK and US.  If either of these assumptions is false then the comparison rests on extremely dubious foundations.   So firstly to HSMRs: one can certainly do far worse than to read this summary by Prof Spiegelhalter:

"The two indices often come up with different conclusions and do not necessarily correlate with Keogh’s findings: for example, of the first three trusts investigated, Basildon and Thurrock was a high outlier on SHMI but not on HSMR for 2011-12(and was put on special measures), Blackpool was a high outlier on both (no action), and Burton was high on HSMR but not on SHMI (special measures). Keogh emphasised “the complexity of using and interpreting aggregate measures of mortality, including HSMR and SHMI. The fact that the use of these two different measures of mortality to determine which trusts to review generated two completely different lists of outlier trusts illustrates this point." It also suggests that many trusts that were not high on either measure might have had issues revealed had they been examined."

HSMRs and SHMIs are not useless, but they are far from perfect, even when monitoring the trend of one individual hospital's mortality over time.  They are more problematic when comparing hospitals, as variations in coding can have huge effects on the results.  This BMJ article highlights further concerns over the use and validity of HSMRs, and many of the excellent rapid responses highlight yet more associated problems.  Here are some other studies outlining problems with the reliability and/or validity of HSMRs (paper 1, paper 2, paper 3).  This particular segment highlights a huge flaw in HSMRs and in Jarman's UK/US comparison:

"The famous Harvard malpractice study found that 0.25% of admissions resulted in avoidable  death. Assuming an overall hospital death rate of about 5% this implies that around one in 20 inpatient deaths are preventable, while 19 of 20 are unavoidable. We have corroborated this figure in a study of the quality of care in 18 English hospitals (submitted for publication). Quality of care accounts for only a small proportion of the observed variance in mortality between hospitals. To put this another way, it is not sensible to look for differences in preventable deaths by comparing all deaths."

This is the crux of it, meaning that a 45% difference in acute mortality as a result of poorer care is utterly implausible.  Now to coding: there is a long history of up-coding in the US and of inadequate, incomplete coding in the UK.   Here is one of the best papers showing that HSMRs are hugely affected by admission practices and coding, something that Prof Jarman seems unwilling to consider as having any effect on the US/UK HSMR difference:

"Claims that variations in hospital standardised mortality ratios from Dr Foster Unit reflect differences in quality of care are less than credible."

I think this applies to Prof Jarman's latest conclusions on US/UK HSMRs; in my opinion his conclusions are less than credible too.  There is also an excellent NEJM piece on the problems with standardised mortality ratios, and the NEJM is a rather reliable and eminent journal.  One recurrent theme in the academic literature is that HES data and standardised mortality ratios are not reliable.  Another recurring theme is that the one person defending them often seems to be a certain Professor Brian Jarman; read into that what you will.


There are so many problems with Professor Jarman's work and conclusions that it is hard to sum them up in one piece; really one needs a whole book.  Firstly, the underlying HES data is unreliable.  Secondly, HSMRs are not reliable and are highly sensitive to different coding and admission practices.  Thirdly, the US and UK are highly likely to be at opposite ends of the spectrum in terms of coding (up versus down) and are also likely to have extremely different admission/discharge practices.  Fourthly, the idea that the differences in UK/US HSMRs are down to care is utterly implausible.  And fifthly, the UK's massive mortality improvements over the last decade are equally implausible.  It appears Professor Jarman has unwittingly scored an own goal: the HSMR has revealed itself with such implausible results.

Monday 16 September 2013

Many big assumptions have been made: 1. HES Data


The US/UK HSMR comparison made by Prof Brian Jarman is continuing to rumble on, and probably not for reasons that will please the Professor.  Since the leaking of the data to the media a few days back, several astute observations have been made that cast great doubt upon Prof Jarman's conclusions.  This is my take on events and my summary of the problems with Prof Jarman's stance; this is part 1, on HES data:

HES data is of low quality and is unreliable (citation 1, citation 2, citation 3, citation 4, citation 5)

"Concerns remain about the quality of HES data. The overall percentage of admissions with missing or invalid data on age, sex, admission method, or dates of admission or discharge was 2.4% in 2003. For the remaining admissions, 47.9% in 1996 and 41.6% in 2003 had no secondary diagnosis recorded (41.9% and 37.1%, respectively, if day cases are excluded). In contrast to some of the clinical databases, if no information on comorbidity is recorded, we cannot tell whether there is no comorbidity present or if comorbidity has not been recorded. Despite these deficiencies, our predictive models are still good. In the most recent report of the Society of Cardiothoracic Surgeons, 30% of records had missing EuroSCORE variables. Within the Association of Coloproctology database, 39% of patients had missing data for the risk factors included in their final model. A comparison of numbers of vascular procedures recorded within HES and the national vascular database found four times as many cases recorded within HES."

and this from a letter by authors including Bruce Keogh:

"This latest study raises more concerns about hospital episode statistics data, showing that errors are not consistent across the country."

and this from the Royal College of Physicians, as recently as 2012:

"This change is necessary because the current process for the collection of data for central returns that feed HES, and the content of the dataset itself, are both no longer appropriate for the widening purposes for which HES are used. "

I can find little to support Brian Jarman's stance that HES data is accurate and reliable.  Prof Jarman's study relies massively on HES data, as it is this very data from which his HSMRs are calculated.  It would be fascinating if Prof Jarman could produce some published evidence to support his stance on HES data.  If the UK data is less complete than the US data, it could well lead to a massive difference in HSMRs that has nothing to do with care standards and is purely down to data quality.  The HSMR itself is a whole other part of the story.

Thursday 12 September 2013

Channel 4's 'UK/US hospital mortality' story is based on Jarman's sandy foundations


Last night I sat down to watch the Channel 4 news and was deeply upset by what was to follow.  The 'news' exclusive on the 'increased mortality' in UK hospitals versus those in the US was presented as if the data behind this theory were robust: there was no discussion of the huge flaws in the methods used, and the panel discussion was completely one-sided.  Channel 4 gave no neutral academics a chance to speak and gave no one the chance to defend the NHS.  It was trial and execution by a biased one-man band; it was shoddy journalism at its worst, very disappointing and very unlike Channel 4's normally excellent coverage.  I shall be factual and careful with what I say next, as it appears some cannot listen to criticism of their pet methodologies without resorting to threats of GMC referral, which is not the sign of a robust argument I would say.

The story claimed that patients were 45% more likely to die in the UK than in the US when admitted with an acute illness such as pneumonia or septicaemia.  This was based on 'research' done by Professor Brian Jarman, he of Dr Foster fame and a big supporter of the HSMR (Hospital Standardised Mortality Ratio) tool.  It must be noted that Dr Foster are rather close to the Department of Health, with the latter being obliged to promote the business interests of Dr Foster 'intelligence'.  It is worth reading about the rather cosy relationship involving Jarman, Dr Foster and the government, because it puts Jarman's potential motives into the open; conflicts of interest are often key to understanding such matters.

Essentially the Channel 4 story was based upon several assumptions that look rather naive, flawed and ignorant to anyone who has a basic grasp of scientific evidence and statistics.  Firstly, the UK mortality data is based upon HES (Hospital Episode Statistics) data, which is notoriously inaccurate and unreliable.  For example, in the recent past HES data showed there were 17,000 pregnant men in the UK, a truly unbelievable statistic.  There is also abundant evidence showing that HSMRs themselves are very poor tools, even for comparing hospitals within the same country, let alone different continents.  HSMRs are crude tools and many academics feel their use should be abandoned entirely.

"Nonetheless, HSMRs continue to pose a grave public challenge to hospitals, whilst the unsatisfactory nature of the HSMR remains a largely unacknowledged and unchallenged private affair."

The above quote shows exactly what the problem is: here we have a flawed and scientifically dubious measure being used where it should not be, because a political bandwagon and some vested interests are running out of control.  Brian Jarman's baby is the HSMR and he is a powerful man with a large sphere of influence.  It is notable that even the Keogh review was rather critical of the use of HSMRs and the way in which they had been inappropriately used to create divisive anti-NHS propaganda.  There are so many things that can change an HSMR, and many are absolutely nothing to do with the actual quality of care provided.

In simple terms, if you put poor-quality data in, you get dubious data out and will reach dodgy, flawed conclusions.  Firstly, HES data, on which the UK mortality rates are based, is poor, and this means that mortality cannot be adequately adjusted for confounding factors: the data is so inaccurate that information about illness type, severity of illness and co-morbidities cannot be properly accounted for in the statistical modelling.  The way data is coded differently in the US and UK is also likely to have a massive effect on the results.  Generally HES data is poor and patients are under-coded, i.e. their illnesses and co-morbidities are not coded to be as bad as they actually are.  The exact opposite is true in US coding: they have a vast army of bureaucrats and have had a marketised system for decades, meaning that over-coding is rife, i.e. hospitals exaggerate the illnesses and co-morbidities of patients in order to increase their revenues.  There have also been huge issues with the accuracy of the US data, and there is a large volume of evidence showing that over-charging as a result of over-coding has been rife in the US, with estimates putting its cost in the multi-billion-dollar ballpark.
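
As a rough illustration of the mechanism (hypothetical numbers of my own, not anything from Jarman's dataset): an HSMR is simply observed deaths divided by the deaths the case-mix model expects, scaled to 100, so the same hospital with the same patients and the same outcomes can look 'good' or 'bad' purely according to how thoroughly its coders record comorbidities.

```python
def hsmr(observed_deaths, expected_deaths):
    """Hospital Standardised Mortality Ratio: observed / expected, scaled to 100."""
    return 100.0 * observed_deaths / expected_deaths

# Hypothetical hospital: identical patients, identical care, 500 observed deaths.
observed = 500

# Under-coding (UK-style): comorbidities under-recorded, so the case-mix model
# expects fewer deaths than it would with complete coding.
expected_if_undercoded = 450
# Over-coding (US-style): the same patients recorded as sicker, so the model
# expects more deaths.
expected_if_overcoded = 650

print(round(hsmr(observed, expected_if_undercoded)))  # ~111, looks like a 'bad' hospital
print(round(hsmr(observed, expected_if_overcoded)))   # ~77, looks like a 'good' hospital
```

With these made-up numbers the gap between 111 and 77 is itself in the region of 45%, without care quality differing at all, which is why coding practice has to be ruled out before any care-quality conclusion is drawn.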

Overall this is a case study in dodgy data and dodgy journalism leading to what appears to be a hugely erroneous conclusion.  Brian Jarman has been reckless and irresponsible in my opinion; he has tried to justify his leaking of unpublished data, but his squirming doesn't cut the mustard for me.  In honesty, Brian Jarman simply doesn't know why the HSMRs are so different in the UK and US.  Interestingly, the UK outperformed the US on fractured hip mortality, a fact sadly ignored by the news and not mentioned by Jarman in his interview.  This is rather salient, as hip fractures are one of the few things coded accurately in the UK, having been attached to a specific tariff for years; this is yet more evidence that coding, and not care quality, is responsible for the differences in HSMRs.  Overall the likely explanation for the differences is the huge discrepancy in coding practice between the US and the UK: in the US they tend to over-code and in the UK we tend to under-code.  The huge effects of coding and admission practices on HSMRs have been well documented in the BMJ in the past.  There is also the fact that many US patients are discharged from acute hospitals to die elsewhere, and Brian Jarman has admitted this hasn't been taken into account in his calculations.

Brian Jarman should not be leaping to such conclusions, and he should be publishing in a reputable journal rather than recklessly leaking such propaganda to the media.  I wonder whether Brian Jarman has done any work at all to investigate the differences between coding in the US and the UK, or whether he simply said what he was nudged towards by the Department of Health.  I also wonder how Brian Jarman can claim his data is adequately corrected for confounders when it is based on unreliable and inaccurate HES data.  Personally speaking, I have lost all respect for Brian Jarman and am now highly suspicious of the cosy, nepotistic little relationship involving the government, Dr Foster and Brian Jarman.  It is just extremely sad that Victoria McDonald was played and used in such a game.  This whole episode has shown very clearly that one thing the NHS needs is good independent statistics and high-quality data, and Dr Foster is definitely not going to be part of a progressive solution.

Friday 2 August 2013

MAST cosying up with the chiroquacktic community

It is interesting that MAST Medical are making such efforts to spread the results of their research within the chiropractic community.  The AECC (Anglo-European College of Chiropractic) strangely posted the above page commenting on MAST treatment and then suddenly removed it from their website, without explanation.  The full page can be accessed here.  Amongst the interesting quotes are the following:

"Groundbreaking new research has shown that up to 40% of people suffering from chronic lower back pain could now be cured from a simple and inexpensive course of antibiotics........Neil Osborne, Director of Clinic at the College is one of the first practitioners in the UK to undergo a Modic antibiotic spine therapy (MAST) course which puts him at the forefront of this new procedure......The new treatment could save the UK economy billions of pounds"
 
It reads like an uncritical press release for MAST.  The 40% figure is a massive exaggeration of reality and the 'billions' saved is also pie-in-the-sky stuff.  Neil Osborne is now at the 'forefront' of this new 'procedure', apparently; or rather, he has parted with some cash to be told about the trial by the very researchers who carried it out!  The British Association of Spinal Surgeons have released some far more sensible information on this antibiotics-for-back-pain story:
 
"BASS considers this to be a well conducted trial which provides evidence that a small number of patients could gain some moderate improvement in their condition with a course of antibiotics." 
 
This is a far more honest and representative opinion of the research of Albert et al.  It is also rather fascinating that Hanne Albert, the lead author who thinks that a conflict of interest is not a conflict of interest, is speaking at the McTimoney chiropractic conference this November.  One must remember the track record of the chiropractic profession in plugging unproven and sometimes dangerous treatments, ignoring scientific evidence, suing their critics, exploiting patients with misinformation and indulging in general quackery.  Have a quick peek at these articles on McTimoney too.
 
The fact that MAST are cosying up to the chiropractic profession says a lot.  It seems MAST and the chiropractic profession have rather a lot in common.  It is deeply inappropriate and unethical that MAST are plugging antibiotic treatment for back pain in such a manner.  Firstly practice should not change on the basis of one trial that has not been replicated independently, especially when two of the authors failed to declare their significant conflicting business interests to the publishing Journal.  Secondly the way in which MAST Medical are effectively advertising their services and seemingly exaggerating the potential gains of treatment appears rather cynical to me.  I would not touch MAST Medical with a bargepole until their research has been independently replicated by a trustworthy research group.
 

Wednesday 31 July 2013

Formal complaint letter sent to European Spine Journal

 
Dear Editor
I am writing regarding the recent publications by Albert et al (1, 2) in the European Spine Journal (ESJ).  These are the indisputable facts of the case:
- Both Albert et al studies were submitted and all authors declared no conflict of interest in 2012 (1, 2)
- Two authors (Albert and Manniche) are company directors of both ‘MAST MEDICAL EDUCATIONAL SERVICES LIMITED’ and ‘MAST MEDICAL CONCEPT LIMITED’
- ‘MAST MEDICAL EDUCATIONAL SERVICES LIMITED’ and ‘MAST MEDICAL CONCEPT LIMITED’ were both incorporated in April 2010
- The ESJ’s guidelines on conflict of interest state “Conflict (if none, “None” or describe financial interest/arrangement with one or more organizations that could be perceived as a real or apparent conflict of interest in the context of the subject of this article)"
 
These are some indisputable facts from the recent correspondence in the ESJ:
 
- The Journal has been made aware of the undeclared conflicts of interest of two authors (Albert and Manniche) relating to the two studies (as stated (3))
- The ESJ has allowed the lead author to publish a response to the ‘No conflict of interest?’ letter which refuses to acknowledge that being a director of a limited company whose work directly relates to the results of the studies is a conflict of interest (“there was no conflict of interest to declare” (4))
- The ESJ Editor has not acknowledged that the authors have failed to declare their conflicts of interest (5)
 
Therefore, in conclusion, I would like to lodge a formal complaint with the ESJ because of its failure to enforce its own clear guidelines on conflicts of interest.  This has been manifest in the following:
 
1. The ESJ has issued no corrections to the two studies to include the conflicts of interest of the two authors (Albert and Manniche)
2. The lead author has published a peer-reviewed letter in the ESJ denying that a conflict of interest is a conflict of interest
3. The Journal has not at any point (Editorial by the Editor and invited Editorial) acknowledged that the two authors failed to declare these conflicts of interest
 
If the ESJ is of the opinion that two authors being MAST company directors does not constitute a conflict of interest, then I would be interested to see how this stance could possibly be justified under the Journal’s own guidelines.
 
I would like to make it clear that unless adequate action is now taken by the Journal, I shall be taking this case to COPE.
 
Kind regards
 
 

Monday 29 July 2013

Oh Dear - European Spine Journal humiliates itself further!

 

It seems that the European Spine Journal thinks that the more it ignores something, the more likely it is to go away; alas, this is not the way the real world works.  The Editor of the Journal has waded into the debate with some rather confused and illogical words.  It seems that those who dare criticise a clear undeclared business interest directly relating to the study results are 'moral preachers'!  I would say that those who point out the failings of peer review in an objective manner are simply fair, honest people who are trying to see science done in the right way, both ethically and morally:

"With a statement as well as an answer to one of the letters to the editors by the principal author (H. A.), the European Spine Journal reacts on numerous accusations mainly from self-nominated moral preachers of the lay press that it has published the two papers without checking the disclosed ‘‘no conflict of interest’’ statement."

The Editor completely ignores the fact that the authors failed to declare a clear conflict of interest:

 "As Editor of the journal, my reply is twofold:

 1. The quality and the originality of research are not less, even if the author has a so-called ‘‘conflict of interest’’, more so when this ‘‘conflict of interest’’ has nothing to do with the process of research.

 2. Every author who wants to publish a paper in the European Spine Journal has to sign a ‘‘no conflict of interest’’ statement before the paper is accepted for publication. The European Spine Journal has neither the capacity nor the size to check the truth of every author’s statement. Here, we have to rely on the honesty of the authors. If an author is not honest and lies to the journal, then this is his/her own responsibility."

In response to point 1: it is entirely irrelevant, as the conflict of interest was NOT disclosed at the time of submission; if it had been, there would be no problem.  Sadly for the authors it was not declared, and this changes the situation completely: an undeclared conflict of interest directly relating to the study content is a clear case of 'research misconduct'.  The authors signed 'no conflict of interest' when two of them had a clear conflicting business interest in MAST Medical.

In response to point 2: it is fair to say the Journal missed the conflict of interest initially, and I have never said this was its fault; the authors need to take the blame for that.  The problem, however, is that my letter has since pointed out to the Journal that a clear conflict of interest was not declared by the authors.  The Journal has failed to acknowledge this failed declaration and has not issued any form of correction to include the previously undeclared conflicts of interest; the Journal has also allowed the author to deny that the conflict of interest is even a conflict.  Finally, there is some appalling condescension in the final paragraph:

"The journal can only ban such an author from future publishing in the European Spine Journal and in severe cases the journal may publicly announce the withdrawal of the publication in question. However, such a decision has to be proportional and is only justified when the conflict of interest manipulates the methodology and results, in other words, the quality and the honesty of the research data. This is clearly not the case in the two papers of H. Albert et al. It is not up to the lay and public press to make themselves the judges about content, which is geared towards a selective community, in this case the spine research community."

It is the Journal's job to make it clear to ALL readers when authors have a conflict of interest relating to the study content.  In this case the European Spine Journal has dismally failed to do this, having been clearly informed of the authors' conflict of interest by my letter.  The Journal has failed to apply its own guidelines on conflicts of interest, which are remarkably clear on this point.  Scientific research should be open for anyone to read, and clear conflicts of interest should be declared honestly so that any reader is aware of the circumstances surrounding the research.  The Journal has failed dismally on this issue and instead prefers to launch attacks against those who dare to objectively criticise its clear governance failings.

Friday 26 July 2013

The BMA ignores its members

The BMA has recently entered negotiations on the junior doctor contract, and for the first time in quite a few years it has surveyed its members for their opinion.  Strangely, the BMA has never asked its members for their opinion on working hours before.  The survey was analysed by Ipsos MORI, but the questions were all crafted by the BMA:

"The questions were designed by the BMA (without input from Ipsos MORI) and covered working hours, pay, quality of life and training, as well as key demographics such as gender, region, grade and speciality."

The BMA press release focused on the dangers of extra hours and based this on weak anecdote, while it completely ignored the key and striking finding that so many junior doctors want the EWTD hours limit increased:

"Many felt that the limit should be raised to better reflect reality." and "While some respondents felt that 48 hours average per week was an acceptable limit of working hours, others mentioned a need to increase the working hour limit. Broadly similar numbers of respondents expressed either view."

Notably, the BMA survey is rather flawed methodologically.  That may be inevitable when individuals try to run a survey on their own, but when an organisation with the money and infrastructure of the BMA does one, is this acceptable?

"This means that the responses are not representative of the population of any audience as a whole. The findings cannot therefore be extrapolated to the overall populations."

So the BMA has spent a lot of time and money on a survey, and then claims the results are pretty meaningless.  How on earth, then, can it justify hyping up the dangers of extra hours based on weak anecdote, while completely ignoring the views of the 'many' who want to work more hours?  Not impressive.  There were some other key findings, including the need for better shift patterns and for key elements of training to be part of the contract.  But overall I can't help thinking that this was a massive missed opportunity.

The cynic in me feels that this was very deliberate on the BMA's part.  They wanted a woolly qualitative survey, as this was something which could be used to neatly ignore all the opinions that they dislike; the weak methodology allowed for this wiggle room and it is very convenient for them.  The hypocrisy of their selective use of anecdote is truly breathtaking: either the survey is too weak to rely on, in which case they should not have hyped up the dangers of extra hours, or they should have given fair coverage to the huge numbers who want more hours.  The BMA's biased and selective use of the results is an insult to its membership.  It is, however, telling that despite the BMA's spin, many doctors are deeply unhappy with the EWTD hours limit and want more hours.

Monday 22 July 2013

The European Spine Journal appears to support research misconduct


The 'antibiotics for back pain' story broke several weeks ago to a huge amount of media hype; in fact the vast majority of coverage was very positive, with very little context or caution thrown into the mixture at that early stage.  A huge reason for this media fanfare was the orchestrated PR campaign conducted by the authors and the MAST Medical directors, groups which, as we have since discovered, are not mutually exclusive.  In fact one MAST Medical company director, Peter Hamlyn, was widely quoted in the media stating:

"This is vast. We are talking about probably half of all spinal surgery for back pain being replaced by taking antibiotics,"

He also claimed the research was worthy of a Nobel prize.  Strangely, there was no mention in the media of Hamlyn's conflict of interest as a MAST company director; perhaps the media were told but just didn't mention it, who knows?  MAST Medical consists of two limited companies which were formed in 2010 and whose business depends on the results of the two Albert et al studies published this year in the European Spine Journal.  So two of the authors had failed to declare a clear conflict of interest relating to their roles as company directors of the MAST firms, and at this early stage it was not clear how much the Journal knew.

One might have thought that the ranting of a lone blogger and a very balanced piece in the BMJ by Margaret McCartney would not have ruffled the feathers of the authors and the Journal; however, recent events have shown this not to be the case.  I wrote to the Journal to express my grave concerns that the two authors did not declare their blatant conflict of interest regarding MAST Medical, and one would have thought that the Journal and authors would quickly apologise and correct the studies to include this conflict of interest.  Alas not: what we have seen has been a blustering piece of orchestrated obfuscation that wouldn't look out of place in a Kafka novel.

The Journal has come out with guns blazing and has refused to acknowledge that the authors' positions as company directors of firms whose business directly relates to the subject matter of the published studies constitute a conflict of interest.  This is strange, as the stance of both the lead author and the Journal clearly contradicts the Journal's own guidance on conflicts of interest.  The Journal has not enforced its own guidelines, allowing the lead author's flimsy response to be published, while also publishing an editorial that has gone on the offensive:


".....the importance of this should not be lost amidst the negative, unstructured and unscientific response to this study....This mass media launch has led to a very widespread and negative feedback in the lay and specialist medical media. Obviously, this is an unregulated, opinionated repository for often extreme opinions, but there must be significant reputational risk both to the scientists, the commercial organisation in London and to the spinal community in general from some of these widely read internet resources (see http://ferretfancier.blogspot.co.uk/2013/05/antibiotics-for-back-pain-conflicts-of.html, and http://theconversation.com/zit-bacteria-causing-back-paina-spotty-hypothesis-14060). In addition, there has been a cynical response from the respected medical press [17] although a subsequent article gave a slightly more tempered response [18]."

It is utterly incredible that the editorial dares to talk of 'reputational risk' to the authors and their businesses, which stand to profit from their studies, as if this were some kind of smear campaign against them; in reality it is rather the opposite.  Strangely, the editorial makes no mention of the MAST Medical PR campaign and its role in intentionally creating the media fanfare.  The authors and their companies have put their own reputations at risk because they failed to declare a clear conflict of interest to the Journal, and now the Journal is stepping onto some very dodgy ground indeed by backing up the unprofessional and unethical stance of the authors.  Their actions constitute a form of research misconduct, and in order to divert attention away from this they are going on the attack, trying to paint those who dare make objective criticism as crazed extremists.

The lead author's response to my letter is nothing but obfuscation: she talks of a limited company as just an 'idea' and paints herself as a sorry victim of an 'attack'.  It is amusing that pointing out a clear undeclared conflict of interest is deemed an attack by Albert; it seems to me to be nothing other than fair, objective criticism that she should properly address.   In reality the lead author is a MAST company director and is trying to take attention away from her failure to declare this as a conflict of interest by going on the attack.

This whole sorry saga demonstrates many important points.  Firstly, digging a hole deeper is never a good option; it is far more sensible to admit an error when one is made.  Secondly, the process of peer review is only as good as those managing it.  Thirdly, those who go on the aggressive attack are often doing so because they are in a very weak defensive position.  I will not let this drop, and it is important that the authors and Journal are not allowed to get away with this blatant refusal to declare a clear conflict of interest.  It is important for science that peer review is not abused in this manner, and to prevent that one must speak the objective truth freely and without fear.  Sadly this story looks set to run and run.

Tuesday 16 July 2013

Laughable response from 'antibiotics for back pain' author


These are the simple facts:

1. Two authors (Albert and Manniche) were named MAST company directors in 2010.

2. 'Antibiotics for back pain' study submitted to European Spine Journal in 2012.

3. Neither author declared a conflict of interest on submission, despite the fact that these companies could potentially profit directly from the study's findings.

The author's letter has just been published in the European Spine Journal and it states:


"The study was conducted from 2007 and the last one year control was carried out in late 2009. After the independent statistical analysis was done, the paper was written and then sent to the first Journal in 2010."

It is irrelevant when the study was sent to the first journal; the author is obfuscating in the extreme.  The paper was submitted to the ESJ in 2012, two years after both authors were named MAST company directors.  The second justification for the undeclared conflicts of interest is:


"Since the company was nothing but ‘‘an idea’’ until after e-pub, there was no conflict of interest to declare."

Actually the company was more than an 'idea': it was a proper limited company when the paper was submitted.  It is worth going back to the ESJ's guidelines on conflict of interest, which make the author's claim look flimsy, to say the least:


"Conflict (if none, “None” or describe financial interest/arrangement with one or more organizations that could be perceived as a real or apparent conflict of interest in the context of the subject of this article)"

The conflict is perceived, real and apparent in the context of the subject of the article.  The author continues to obfuscate by digressing into details about the MAST company and tries to paint herself as the 'disheartened' victim of an attack.  Sadly this is completely irrelevant: MAST is a company, she is a director of this company, and she should have declared this.  If the intentions of the company are benevolent, then why not simply declare it as the conflict of interest which it still is?

It is breathtaking, dumbfounding and utterly farcical that the author feels she can justify failing to declare her clear conflicts of interest; she is plainly in breach of the ESJ's clear guidelines on this very topic.  The Editor's failure to enforce these guidelines, and the decision to allow the publication of this laughable defence, speaks volumes.  The Journal appears to value publicity over ethics and standards.

The author and ESJ would have done better to simply apologise and admit the undeclared conflicts of interest.  Instead they have tried to justify the overtly unjustifiable, they have aggressively defended the blatantly incorrect, they have obfuscated and they continue to do all this.  Personally I am not impressed with this at all and it is precisely this kind of behaviour that makes an absolute mockery of peer review.  To be accused of 'attacking' the author for daring to point out a clear undeclared conflict of interest is rich, to say the least.  I am just pointing out clear objective facts which the author and European Spine Journal continue to ignore.

Sunday 14 July 2013

Antibiotics for back pain - the saga continues


A series of articles has appeared in the ESJ in response to the 'antibiotics for back pain' study by Albert et al.  There are three letters to the Editor, two of which are mine, and one of these points out the clear undeclared conflicts of interest of the authors.  There is also an editorial; it is worth reading it yourself to reach your own opinion, of course, but it does appear to be a rather aggressive and blustering defence of the Journal's stance and actions.  There are some sentences that simply don't make great sense:

"Against this background, it seems that a mass media launch in the United Kingdom, Europe, and North America with the story rapidly being taken up around the world, offering a ‘‘cure’’ for back pain propagated from a private clinic in London, should be considered unwise"

Meanwhile, the way in which the Editorial glosses over the conflict of interest issue is interesting, to say the least:

"One of the authors of the Southern Denmark papers has a commercial link with this organisation and addresses this conflict of interest elsewhere in this edition of the European Spine Journal."

This is factually wrong: actually two of the authors are named company directors of MAST Medical firms, as I have stated on this blog.  The Editorial strangely doesn't mention that one of Albert and Manniche's business partners in the MAST Medical companies, Peter Hamlyn, was directly involved in creating the media fanfare and mass hysteria:

"This is vast. We are talking about probably half of all spinal surgery for back pain being replaced by taking antibiotics" (Peter Hamlyn of MAST Medical in Guardian)

There is thus a complete failure of the Editorial to even notice that the massive media fanfare was instigated by the author/s and their very well rehearsed PR campaign, involving their business partners such as Peter Hamlyn.  The bizarre accusations levelled at bloggers like myself made me crack up:

"This mass media launch has led to a very widespread and negative feedback in the lay and specialist medical media. Obviously, this is an unregulated, opinionated repository for often extreme opinions, but there must be significant reputational risk both to the scientists, the commercial organisation in London and to the spinal community in general from some of these widely read internet resources (see ferret fancier, and the conversation). In addition, there has been a cynical response from the respected medical press [17]although a subsequent article gave a slightly more tempered response [18]."

The irony of the 'unregulated' and 'opinionated repository' is highly amusing, given the completely unregulated freedom of peer-reviewed journals to do precisely the same!  Margaret McCartney is labelled 'cynical' for daring to question undeclared conflicts of interest!  I would like to personally thank Margaret for her work on this, as I feel that without her pressure the Journal may well have buried a lot of this criticism.  Near the end the Editorial simply tells us what is right, and again makes no mention of the authors' and MAST Medical's PR being the prime reason for the media fanfare that ensued:

"We believe that the surgical and scientific communities should have a tempered and objective response to these publications."

The overall point is simple.  Two authors were named company directors of firms that stood to profit directly from the results of the research, and this was not declared when the articles were submitted, despite the fact that both were named company directors two years before the articles were even submitted!  The authors have a lot of explaining to do, and I struggle to see how they can dig their way out of the hole they have created for themselves.  The Journal also failed to pick up this undeclared conflict and appears to still be in denial on the issue.

The Journal has not behaved impressively either; this sort of aggressive, defensive posturing doesn't do it any favours at all.  The Editorial contains basic factual errors and ignores the way in which the media fanfare was MAST Medical driven.  The attempt to smear those, including myself, who have tried to expose the truth is weak.  I suggest that the 'reputational risk' to the scientists is 100% self-inflicted; the failure to declare serious conflicts of interest is a major professional failing that the Journal should be taking far more seriously.  The fact that these conflicting interests were undeclared had a major impact on the way in which the research was reported, and interpreted by doctors and patients alike.  This should be acknowledged at the very least.

PS I must also say that it is a remarkable coincidence that the Editorial is written by John O'Dowd and Adrian Casey, the latter of whom happens to work at the same private hospital at which Peter Hamlyn (MAST Medical Academy) also works!  Just chance, I presume.