Wednesday, 8 October 2014

Open letter to the BOA president

Dear BOA President,

I am writing to express my disappointment at the BOA's recent move to force orthopaedic surgeons in training to become BOA members, by making BOA membership compulsory in order to sit the UKITE exam.

This is a regressive and short-sighted move that will alienate a whole generation of Orthopods from the BOA.  Instead of forcing trainees to join, you should be trying to work out why trainees are not joining and remedy this, rather than aggressively forcing their hand.

Do not underestimate the harm you will do if you do not reverse this decision.  I would strongly urge you to reconsider, as the feeling amongst trainees on this issue is very strongly in disagreement with the BOA,

Kind regards

Ben Dean
Orthopaedic trainee

Friday, 13 June 2014

John Cook responds, but the reliability issue is clearly a key study flaw

Here is John Cook's response to my questions:

"Re rater reliability and data breakdowns, we’ve released the raw individual ratings as well as the final ratings of all 11,944 papers at http://www.skepticalscience.com/tcp.php

Re how the endorsement levels were created, this was the result of a long, collaborative discussion between the authors - attempting to resolve the issue that different authors expressed endorsement of the scientific consensus in different ways. By allowing for different expressions of endorsement, it allowed us to have our cake and eat it too.

Re the proportion of studies providing primary evidence, we didn’t tag such papers - but it is an interesting idea worth exploring."
 
The reliability issue appears to be a big thorn in Cook's side.  In my opinion, intra-rater and inter-rater reliability should have been published as part of the original study; the peer review should have picked this up, and the fact that it did not is worrying.
 
Cook's suggestion that the data is there for analysis is potentially misleading in my opinion.  Analysis is not possible without days and days of re-working the data into a manageable format from which the reliability analysis could be performed.  Moreover, I do not think the data is even there for calculating intra-rater reliability, as it appears that raters did not re-rate the studies.
 
In my opinion the data should be provided in a format in which rater reliability can be easily calculated; this is clearly not the case as things stand.  Given that this is fundamental to the original study and should have been published in the first place, I find Cook's stance on the data rather unhelpful, and a cynic could interpret this as Cook trying to hide the problem of a far from reliable rating system.
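For illustration, here is a minimal sketch of the sort of reliability calculation I have in mind, assuming a hypothetical tidy export of the ratings (one row per paper per rater; the file name and column names are my own invention) and the standard scikit-learn implementation of Cohen's kappa:

```python
# Sketch only: assumes a hypothetical tidy CSV of the raw ratings with
# columns paper_id, rater_id and rating (the 1-7 endorsement level).
import pandas as pd
from sklearn.metrics import cohen_kappa_score

ratings = pd.read_csv("tcp_ratings.csv")  # hypothetical file name

# Line up each rater's ratings of the same papers side by side.
wide = ratings.pivot_table(index="paper_id", columns="rater_id",
                           values="rating", aggfunc="first")

# Inter-rater agreement for one pair of raters, on papers both rated.
pair = wide.iloc[:, :2].dropna()
kappa = cohen_kappa_score(pair.iloc[:, 0], pair.iloc[:, 1])
print(f"Cohen's kappa: {kappa:.2f}")
```

In the real study each abstract was, as I understand it, rated by different pairs of volunteers, so a proper analysis would pool agreement across all rater pairs (or use a multi-rater statistic), but the point stands: with the data in a sensible format this is a few lines of work, not days of re-working.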
 
At least Cook does acknowledge my point that it would be well worth exploring which studies actually provided strong primary evidence to back up their subjective opinion.  A study that reviewed the primary evidence for man-induced global warming would be far more valuable than a study which simply surveys subjective opinion that may be based upon no meaningful evidence at all.

Thursday, 12 June 2014

Why won't John Cook reply to my simple questions?


Strangely, or not so strangely, having been 'redirected' to send my letter's questions directly to the author, John Cook, I have received a response, but no attempt to answer my queries about his study's methodology.  John Cook simply asked if I was one of many people 'that referred to Skeptical Science as "That Propaganda Site"?'.

I fail to see what my opinion of John Cook's website has to do with him answering some very simple questions about his study's methodology.  Perhaps Dr Cook should just answer the questions, or is there a reason why he cannot?  Here are the questions that John Cook chooses to ignore, for whatever reason:

"I read the study by Cook et al with great interest (1).  Firstly the study used levels of endorsement of global warming as outlined in their Table 2, however I can see no mention as to how these levels were created and how reliable they were in terms of both inter-rater and intra-rater reliability (Cohen’s kappa); would it be possible for the authors to clarify?  Secondly the authors ‘simplified the analysis’ by breaking down ratings into three groups, however they have not included the data breaking down the results into the original 7 categories: would it be possible to see this data?  Finally the study showed that 62.7% of all papers endorsed the consensus, but it does not mention how what proportion of these studies actually provided primary evidence to support the consensus: did the authors gather this information? "


Friday, 6 June 2014

Strange response from Environmental Research Letters on Cook et al's 97% paper


 
"I read the study by Cook et al with great interest (1).  Firstly the study used levels of endorsement of global warming as outlined in their Table 2, however I can see no mention as to how these levels were created and how reliable they were in terms of both inter-rater and intra-rater reliability (Cohen’s kappa); would it be possible for the authors to clarify?  Secondly the authors ‘simplified the analysis’ by breaking down ratings into three groups, however they have not included the data breaking down the results into the original 7 categories: would it be possible to see this data?  Finally the study showed that 62.7% of all papers endorsed the consensus, but it does not mention how what proportion of these studies actually provided primary evidence to support the consensus: did the authors gather this information? 

1. Cook J et al. Quantifying the consensus on anthropogenic global warming in the scientific literature. Environ. Res. Lett. 8 (2013) 024024. "
 
I wrote the above letter to ERL and received the following response:
 
"We regret to inform you that the Board Member has recommended that your article should not be published in the journal, for the reasons given in the enclosed report. Your manuscript has therefore been withdrawn from consideration."
 
The Board member states:
 
"The "methodological queries" is not a manuscript suited for publication at all, it simply is a set of questions to the authors of Cook et al. I would advise the author to pose these questions directly to John Cook, as is the normal procedure if someone has further questions about a publication - the corresponding author's contact address is provided with each paper on ERL."
 
I have sent the following response back to the Journal outlining why I feel the response from their Board member is grossly inadequate:
 
"I appreciate your response however the referee's comments are grossly inadequate in my opinion and I would like to request that my letter be reviewed by another reviewer who is independent to the Journal.
 
My questions relate to significant methodological flaws in the study by Cook and it is perfectly acceptable to submit this as a letter, so that such methodological flaws are discussed openly in a public forum.
 
I have written to several Journals and had several similar letters published in the past; I have never heard of this excuse for rejecting a submission.
 
Of note the reviewer did not even comment on the validity of my methodological questions, something I find rather strange.
 
I would appreciate a swift response to this letter"
 
This whole thing appears rather fishy.  I have also emailed John Cook to see if he can answer my questions; for some reason I suspect I shall receive nothing back from him.  It is 'normal procedure' for many scientific Journals to publish letters such as mine, which outline methodological concerns, so that these can be discussed openly and any problems with the study are noted in the public domain.  For some reason this ERL Board member doesn't want concerns about Cook's paper to be aired in public; I can't imagine why.

Thursday, 5 June 2014

Telegraph gives away its nasty agenda

 
The above headline is a total disgrace.  The contaminated drips were supplied by a private firm, ITH Pharma, and the NHS is totally blame-free in the poisoning of 15 babies.  However the Telegraph chose to cynically mislead readers and smear the NHS.  Of note, this is not an isolated example; the Telegraph has been denigrating and smearing the NHS for a long time now, yet strangely the private sector never gets the same treatment, even when it kills babies.
 
I would urge everyone to complain to the PCC by using this link:
 
 
Feel free to use the text below in your complaint:
 
"i) The Press must take care not to publish inaccurate, misleading or distorted information, including pictures."
 
The headline is inaccurate and misleading.  The drips were used in the NHS but they were not manufactured by the NHS.  They were manufactured by a private firm, ITH Pharma.

The lazy and inaccurate headline misleads readers and directly implies that the NHS is at fault for the harm to the babies, when it is not in any way to blame.

"ii) A significant inaccuracy, misleading statement or distortion once recognised must be corrected, promptly and with due prominence, and - where appropriate - an apology published. In cases involving the Commission, prominence should be agreed with the PCC in advance."
The Telegraph should issue a front page correction and apology as a lead article.

Of note, the Telegraph clearly knows the headline was a mistake, as it has already changed the headline on its website to "One baby dead and 14 with blood poisoning from contaminated drip".  Alas, the damage has been done with this inaccurate and misleading headline in the print edition.


Wednesday, 28 May 2014

NICE throwing away millions of our money


Firstly, I am no expert in weight management, but I can analyse evidence.  I just wanted to outline the foundations of sand upon which the recently released NICE guidance is based.  NICE has released guidance on the management of obesity and has summarised the evidence here, or should I say the lack of evidence.  Even NICE admits the evidence is poor:

"...the studies tended to be small and with methodological limitations, providing little information on intervention setting and evaluating a fairly restricted range of interventions. In most RCTs, the methods of randomisation, allocation to treatment group and blinding of outcome assessors were inadequate or not possible to assess due to poor reporting"

Then I started to look through the studies that NICE has cited.  A large systematic review has demonstrated that 'little evidence supports the efficacy of commercial and self help weight loss programmes'.

The only study that has shown any benefit at all from a commercial weight loss programme has major flaws.  This study lost almost a third of its patients to follow-up, so the authors have no idea what happened to those patients' weights and health.  The study also assumed that 'participants who made no follow-up visits were assumed to remain at baseline value'; this is generous at best, and at worst it may well be the real reason the trial showed a 3% weight reduction in the 71% of patients who did manage to complete their follow-up.  The data also showed that maximal weight loss is achieved early on, with patients tending to regain weight beyond the 26-week time point.

Essentially all the trial showed was that the more motivated people tended to keep off a small amount of weight over two years, and this was probably independent of the commercial weight loss programme; it could easily have been down to selection bias.  In fact, had they managed to follow up the 29% who were lost, it is arguable that these were the least motivated participants, who would probably have gained significant amounts of weight by the 2-year time point.
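To show just how 'generous' the baseline assumption is, here is a minimal sketch with entirely invented round numbers (100 participants, 90 kg baseline, a 3% loss among the 71% who completed follow-up), comparing the trial's imputation with a plausible alternative in which dropouts regain weight:

```python
import numpy as np

baseline = 90.0
completers = np.full(71, baseline * 0.97)  # 3% loss in the 71% followed up

# Assumption 1 (the trial's): the 29 dropouts sit exactly at baseline.
change_trial = (np.concatenate([completers, np.full(29, baseline)]).mean()
                - baseline) / baseline

# Assumption 2: the 29 dropouts regain to 2% above baseline, plausible
# if the least motivated participants are the ones who drop out.
change_regain = (np.concatenate([completers, np.full(29, baseline * 1.02)]).mean()
                 - baseline) / baseline

print(f"Baseline carried forward: {change_trial:+.1%}")  # about -2.1%
print(f"Dropouts regain 2%:       {change_regain:+.1%}")  # about -1.6%
```

Under the second assumption over a quarter of the headline effect disappears, and that is before any allowance for selection bias; the reported benefit rests heavily on an unverifiable guess about the missing 29%.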

I conclude that the NICE guidance is going to result in a huge waste of public money on commercial weight loss programmes that do nothing of any benefit to anyone but their own company bank balances.  NICE does not have the evidence to back up its guidance, as I have outlined above.  In the absence of good evidence, we should not be gambling with millions of taxpayers' pounds that could be more effectively spent elsewhere.  After all, until the government addresses our obesity-prone environment with holistic policies in non-medical areas, society will simply continue to get fatter and fatter, as NICE pisses our money into the wind.

 

Friday, 25 April 2014

Letter to Niall Dickson


Niall Dickson has written in the BMJ and here is my response:
"The two year foundation programme was introduced in 2005 (as part of the Modernising Medical Careers programme) and has had broad support, reflected in Aspiring to Excellence (the report of John Tooke’s independent inquiry into Modernising Medical Careers) in 2008..."

I read this particular part of Niall Dickson's piece with great interest and am far from convinced that the Foundation Programme has had broad support.  The Tooke review found many problems within Foundation training, including the fact that a “sub analysis of the e-consultation response from 398 FY2 doctors revealed that 60% did not feel that the year had added value over and above further patient exposure” (1), and consequently recommended that “Foundation Year 2 should be abolished as it stands but incorporated as the first year of Core Specialty Training”.  Professor John Collins’ subsequent review of Foundation training in 2010 detailed numerous significant concerns, including that the “assessment of Foundation doctors is considered to be excessive, onerous and not valued”, and concluded that “the lack of an agreed purpose and of prospectively collected evaluative data made it difficult to accurately quantify how successfully the Foundation Programme is delivering against these objectives” (2).  A survey that I organised also demonstrated clear failings in the Foundation Programme, including a lack of acute emergency exposure for FY1 trainees (3).  It appears strange that Niall Dickson equates the above with ‘broad support’.

I must add that the true motives of the Shape of Training Review are not known.  I have requested documentation from the GMC relating to the motives behind Professor Greenaway’s review under the Freedom of Information Act:

“Has the Chair of the review (Prof Greenaway) discussed the review with any ministers/civil servants? If so may I see the documentation of these meetings and who was involved?”

Strangely the GMC are blocking this request, using a public interest argument for withholding this vital information.  This is particularly strange for an organisation that claims as one of its five core organisational values “We are honest and strive to be open and transparent”.  The emerging consensus opinion of the medical profession appears to be that the Shape of Training Review is highly flawed and the public deserves to see all the information that may shed light on the true motivations behind such a review. 

1. Tooke J. Aspiring to excellence: findings and final recommendations of the independent inquiry into Modernising Medical Careers. Jan 2008. www.medschools.ac.uk/AboutUs/Projects/Documents/Final%20MMC%20Inquiry%20Jan2008.pdf
2. Collins J. Foundation for Excellence. October 2010. http://www.mee.nhs.uk/pdf/401339_MEE_FoundationExcellence_acc.pdf
3. Dean BJ, Duggleby PM. Foundation doctors' experience of their training: a questionnaire study. JRSM Short Rep. 2013 Jan;4(1):5. doi: 10.1258/shorts.2012.012095. Epub 2013 Jan 14.

Wednesday, 23 April 2014

Letter to NICE on 'preventable' deaths due to acute kidney injury (AKI)



Dear NICE
 
I have a simple request relating to your press release:
 
"Around 20 per cent of emergency cases of AKI are preventable which would save around 12,000 lives each year in England."
 
I would like to know how you calculated this figure of 12,000 'preventable' deaths.
 
The best recent research estimates that in total there are only approximately 11,859 preventable deaths per year in the whole NHS.
 
I would be grateful if you could justify this 12,000 number for AKI or consider withdrawing it from your website and press release, as in my opinion it is inaccurate and potentially scaremongering,
 
Yours

Tuesday, 25 March 2014

Shape of Training: what is the GMC hiding?


I have made the following request to the GMC under the Freedom of Information Act:

“Has the Chair of the review (Prof Greenaway) discussed the review with any ministers/civil servants?  If so may I see the documentation of these meetings and who was involved?”
 
The GMC have since replied:
 
"We have now considered your request under the Freedom of Information Act 2000 (FOIA). We are sorry for the delay in responding to you.
 
I can confirm that Professor Greenaway has had discussions with ministers or civil servants about the Shape of Training Review. We hold some information regarding these discussions. However, I must confirm that we are unable to provide this information to you under the FOIA. This is because we believe that the exemption listed at section 36(2)(b)(ii) of the FOIA applies."
 
I have consequently appealed to the ICO, who have turned down my request, citing the GMC's public interest defence.  This is strange given that back in 2009 the ICO stated:
 
“There is also a significant public interest in ensuring that the public are well informed about options being considered by the government so that they can fully understand the government’s reasoning behind the need to review the Consultant role. The withheld information would allow the public to engage in a constructive debate as to whether the reasons for the review as well as the options being considered have been properly weighed alongside the potential impact on health care services.”
 
It is strange that the ICO appears to be contradicting its judgement from 2009, and it is strange that the GMC appears to be so against 'openness and transparency' when one of its five core values is to 'strive to be open and transparent'.
 
The Shape of Training review appears to be a 'danger to patients' in the eyes of the BMA, and the GMC's refusal to release information that may very well reveal the true motives behind these reforms appears to be a total contradiction of its core value of 'openness and transparency'.  In my opinion the public interest argument for releasing this information easily trumps the public interest argument for withholding it.
 
I shall be writing a separate letter to Niall Dickson giving him a final chance to serve the public interest and strive to be open, as well as appealing this decision formally.  The truth will out, I hope; otherwise there isn't much hope for any of us involved in medical training.
 

Monday, 3 February 2014

My letter to BMJ on Tooke's defense of the Shape of Training

I read Professor Sir John Tooke’s recent editorial with great interest (1), and it is strange that he sees the Shape of Training review as representing a ‘broad consensus’ of opinion. Professor Sir John’s ‘MMC inquiry’ concluded that:

“The policy objective of postgraduate medical training is unclear. There is currently no consensus on the educational principles guiding postgraduate medical training. Moreover, there are no strong mechanisms for creating such consensus.”

This quote is as true and relevant today as when it was originally published in 2008. Subsequent to the publication of the MMC Inquiry, information released as a result of Freedom of Information (FOI) requests proved that the motives behind MMC were ultimately cynical (2):
 
“The advantage of creating a new structure for doctors coming through the new training is that it avoids having to renegotiate the contract with existing consultants ... which would be bitterly resisted.”

The GMC has recently refused a specific FOI request to release documentation pertaining to discussions that Professor Greenaway had with ministers or civil servants about the Shape of Training review. This information may help to clarify the genuine motives behind the Shape of Training review and the GMC’s decision is currently under appeal with the Information Commissioner’s Office. The GMC’s refusal to release such important information appears inconsistent with the motives behind the Shape of Training review being entirely well intentioned.

The Shape of Training review seems another cynical, politically motivated disaster in the making, one that prioritises the short term needs of politicians over the short and long term needs of both doctors and patients. Not only do the genuine motives of the review remain unclear, but the specific details regarding implementation appear almost entirely absent for all specialities other than women’s health, child health and mental health. Forgive me for daring to question the ‘consensus’, but the Shape of Training appears to be no such thing, and the medical profession is right to remain guarded until all the specific details have been laid bare on the table.

1. Tooke J. Postgraduate medical education and training in the UK. BMJ (Clinical research ed.) 2013;347.

2. Remedy UK. http://www.remedyuk.org/index.php/blog/article/was-mmc-a-trojan-horse-fo.... Remedy UK website, 17 May 2009.

Thursday, 19 December 2013

DH and Hunt are disingenuously spinning data


The NHS is in crisis because the current government have wasted billions on yet more corrupt market-based reforms which will enrich private firms at the expense of the taxpayer.  Sadly those in control, Jeremy Hunt and the Department of Health, don't want the truth to be revealed and are frantically trying to spin reality in their favour.  For example, yesterday Hunt said on Twitter:

"This yr gave NHS more financial help than ever b4 -for 2900 more staff"

NHS spending is barely increasing, and it is going up at a rate well below normal inflation; in real terms, therefore, NHS funding is being cut year on year.  This is having the inevitable knock-on effects on services: local providers are cutting services left, right and centre as a result.

The talk of 'increasing staffing levels' was echoed by another member of the DH propaganda ministry, Tim Jones, the head of news at the DH:

"Oh, and er, 4,400 more clinicians since 2010"

Hunt, Jones and the DH are engaged in a disingenuous game of tricking the public and misrepresenting the reality of what is going on in the NHS on the front line.  It is easy to cherry-pick time points to pretend staffing is increasing, as Hunt and Jones did yesterday, but the above graph shows that this is extremely misleading.  Staffing levels have been static in recent years, as evidenced above, and trying to pretend anything else is simply shameless political spin.  Here is a graph of the net change in staffing levels, showing that claims of a recent increase are highly disingenuous:


So here we have a situation whereby those in control are proving they cannot be trusted.  The reality on the ground shows crisis in emergency departments, staffing shortages and numerous service cuts.  Hunt and the DH cannot be trusted on the NHS; this government is on its last legs.

Friday, 8 November 2013

Department of Health has no record of Brian Jarman's meeting with Jeremy Hunt


 
Firstly, I do not agree with Brian Jarman on many things, but I have the utmost respect for his integrity as a researcher and a human being.  Therefore I have no doubt that when he says the following, he is speaking the truth:
 
"I met Jeremy Hunt on the afternoon of Monday 15th July for about 40 minutes I'd estimate."
 
It is therefore very strange that when I emailed the Department of Health and asked them, under the Freedom of Information Act, for their records of the above meeting, they replied:
 
"The Department of Health does not hold information relevant to your request."
 
This is strange.  Surely Jeremy Hunt should be recording what went on at such meetings, given that he is an elected MP and the Secretary of State for Health?


 

Tuesday, 29 October 2013

Why does my shoulder hurt?

The aim of this piece is to summarise my understanding of why things hurt and, specifically, why the shoulder hurts. Obviously this is one monstrously huge topic and it is beyond the scope of a short blog piece to describe everything in minute detail. It is important to bear in mind that this is just one person’s opinion, and although I have tried to keep things as evidence based as possible, there are numerous huge gaps in our knowledge; it is therefore perfectly normal to become confused the more one delves into these great unknown areas. I hope this piece does confuse you, as the confusion has the potential to open your mind to appreciating these big unknowns; perversely, seeing it all in black and white is probably a sign of not really understanding the topic.

 
Pain and its importance

Firstly pain is a very subjective entity and can be defined as:

“An unpleasant sensory and emotional experience associated with actual or potential tissue damage, or described in terms of such damage.”

This definition conveys just how important both central and peripheral systems are in pain processing and pain development. It is vital to remember that pain is frequently a totally necessary and useful sensation. Pain after an acute injury reminds us to be careful and forces us to titrate our activity levels to a level that can be tolerated by the healing tissue. Interestingly, after an acute injury, such as blunt trauma or a burn, the normal pain response involves an increase in pain sensitivity in both peripheral (peripheral sensitisation) and central (central sensitisation) pain-processing (nociceptive) structures. The artificial division between peripheral and central is at the level of the spinal cord, with sensitivity at the cord and above deemed ‘central’, while sensitivity peripheral to the cord is deemed ‘peripheral’.

Our nociceptive systems are not just simple afferent sensors of tissue damage, as once thought; higher systems such as emotion and mood can have powerful downward effects on the peripheral nervous system via descending modulatory circuitry. This descending circuitry consists of both direct neural routes and indirect endocrine routes such as the hypothalamic-pituitary axis. These descending systems may affect not only pain perception, but also peripheral tissue homeostasis and healing. The peripheral nervous system’s important afferent role in tissue homeostasis and healing has been relatively under-researched and is consequently rather poorly understood, but it is well known that denervation is highly detrimental to the healing of both ligament and bone (http://www.ncbi.nlm.nih.gov/pubmed/12382964).

Pain may occur in the presence or absence of tissue damage; consequently there is huge variability in the patterns of peripheral and central changes in patients presenting with shoulder pain. Some patients have significant tissue abnormality, for example a rotator cuff tear, with a very peripheral pattern of pain sensitivity, while others have no significant tissue abnormality and a very central pattern of pain sensitivity (http://www.ncbi.nlm.nih.gov/pubmed/21464489). It is also worth noting that many patients have significant tissue abnormalities without any pain or functional symptoms at all (http://www.ncbi.nlm.nih.gov/pubmed/10471998).

Pain and structure

Despite the high number of asymptomatic patients with rotator cuff tears, there is still a very strong relationship between pain symptoms and rotator cuff integrity. Rotator cuff tendinopathy is the commonest ‘cause’ of shoulder pain and accounts for about three quarters of patients presenting. Patients with rotator cuff tears are far more likely to have symptoms than those without (unpublished work by Oag et al), while previously asymptomatic patients are far more likely to develop symptoms as their tear size increases (http://www.ncbi.nlm.nih.gov/pubmed/16882890). It is unlikely that any one specific factor which precisely determines symptom development will ever be identified; pain is complex and many factors may contribute to its development. Glenohumeral joint kinematics change as tear size increases, but this abnormality is equally prevalent in both symptomatic and asymptomatic patients (http://www.ncbi.nlm.nih.gov/pubmed/10717855, http://www.ncbi.nlm.nih.gov/pubmed/21084574).

Although the rotator cuff tendons are the most common ‘source’ of shoulder pain, damage to any structure may result in pain development, other common sources being the glenohumeral joint and capsule, the labrum, the acromioclavicular joint and the cervical spine. It can be incredibly difficult, and sometimes impossible, to pinpoint the exact ‘source’ of pain using history, examination and radiological investigations, for a wide variety of complex reasons. The more highly ‘centrally sensitised’ a patient is, the more difficult it can be to reach a specific diagnosis; this may be demonstrated by clinical features such as pain radiating all the way down the arm and tenderness to palpation over a wide area. It is important to realise that highly centralised patients may have a very treatable peripheral pathology in isolation, but it is also vital to think of nerve entrapment in the cervical spine as part of one’s differential diagnosis.

History, examination findings including special tests, and diagnostic injections (bursal injections or nerve root blocks) are all undoubtedly useful, but they can all be problematic due to a lack of specificity; for example, a C6 nerve root block may abolish a patient’s ‘cuff tear related’ shoulder pain as a result of C6 innervating the shoulder joint. Both rotator cuff tears and cervical spine degeneration become increasingly common with age, meaning that many patients may well have dual pathology, and sometimes it is only after failed surgical intervention that a missed contributing pathology is recognised. Tissue abnormality can only explain so much in terms of symptomatology, and the fact that patients with similar tissue abnormalities can have such different levels of symptoms is likely explained by individual variations in nociceptive processing, both peripherally and centrally.

Time and the placebo are great healers

The treatment of shoulder pain depends on a great number of factors, including the patient’s specific diagnosis, the patient’s preferences and the treating clinician’s choice of strategy. As a natural cynic, I have become increasingly convinced that many widely used treatments have little efficacy beyond the placebo effect. The placebo effect is incredibly powerful and useful in augmenting any particular treatment strategy; the patient’s belief and expectation can be huge drivers of symptomatic improvement, whether or not a treatment has a significant ‘treatment effect’.

The body has a tremendous capacity for dampening down pain over time, and amazingly this process is hardly understood at all. Huge numbers of patients with significant symptoms simply get better without any intervention, despite there being no improvement, or even a significant deterioration, in their structural tissue abnormality; many therefore simply never consult their primary care clinician or a secondary care specialist. Many patients do not ‘get better’ but are reasonably content to live with a certain level of pain and disability; after all, this is part of what normal ageing entails. We have to be realistic about what can be achieved, and setting real-world patient expectations is an important part of any consultation.

It is certainly beyond this short piece to bore you with all the evidence for all the possible treatments for all the possible diagnoses of shoulder pain. What is of value is remembering that every individual is different, with a unique pattern of peripheral tissue changes combined with highly variable peripheral and central nervous system changes, and that these drivers of pain symptomatology are well worth considering when embarking upon any particular treatment plan. Everything we do is imperfect: diagnosis is fraught with problems of sensitivity and specificity, and we have no cures for the age-related disorders that result in so much pain and disability in those they affect. First we should try not to do any harm, and second we should try to appreciate that pain is a very complicated and confusing entity.

Further reading

A summary of the key mechanisms involved in shoulder pain can be found here (http://www.ncbi.nlm.nih.gov/pubmed/23429268) at the British Journal of Sports Medicine.

Friday, 20 September 2013

Professor Jarman's reliance on the c-statistic

There is so much more to this UK/US HSMR comparison than first meets the eye.  At first glance there were some significant flaws and assumptions made, but the more digging I have done, the more gaps I seem to be finding in certain people's robust conclusions.  Many thanks to Prof Jarman for the links and data he has provided; it is well worthwhile reading up on the methodology behind HSMRs in the first instance.  One thing it is vital to understand is the so-called 'c-statistic', which is a basic measure of how well a model fits the mortality data for a particular group of patients:

"The success of case-mix adjustment for accurately predicting the outcome (discrimination) was evaluated using the area under the receiver operating characteristic curve (c statistic). The c statistic is the probability of assigning a greater risk of death to a randomly selected patient who died compared with a randomly selected patient who survived. A value of 0.5 suggests that the model is no better than random chance in predicting death. A value of 1.0 suggests perfect discrimination. In general, values less than 0.7 are considered to show poor discrimination, values of 0.7-0.8 can be described as reasonable and values above 0.8 suggest good discrimination."

and

"As a rank-order statistic, it is insensitive to systematic errors in calibration"

The second quote is particularly salient, as one of the key flaws in Professor Jarman's US/UK comparison may be that there were systematic differences between admissions policy and coding in the two countries; these would not be detected by the c-statistic.
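To make that second quote concrete, here is a minimal sketch (invented numbers, standard scikit-learn AUC) showing that the c-statistic is completely blind to a systematic calibration error: doubling every predicted risk changes nothing, because the statistic depends only on how the predictions rank the patients.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Invented admissions data: death probability rises with a risk score.
risk = rng.uniform(0, 1, 1000)
died = rng.random(1000) < 0.1 * risk   # simulated outcomes
pred = 0.1 * risk                      # a perfectly calibrated model

# Systematic mis-calibration: double every predicted risk.
pred_doubled = np.clip(2 * pred, 0, 1)

print(roc_auc_score(died, pred))          # some value, say ~0.7
print(roc_auc_score(died, pred_doubled))  # exactly the same value
```

So a model could systematically understate (or overstate) every patient's risk, exactly as differential coding between two countries would cause it to do, and the c-statistic would not budge.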

It is then interesting to look at Dr Foster's data concerning the c-statistics they have obtained for the clinical conditions that they use to determine the HSMRs of UK hospitals.  Dr Foster routinely uses 56 diagnostic groups (contributing towards about 83% of UK deaths), defined according to ICD codes.  Of note, much of Prof Jarman's UK/US comparison used only 9 diagnostic codes, which is a little strange in itself: why not use all 56?  These 9 codes covered less than half of the deaths in both countries.  I have listed the codes used and their individual c-statistics, based on Dr Foster's 2012 report:

Septicaemia - 0.792 (reasonable)
Acute MI - 0.759 (reasonable)
Acute heart failure - 0.679 (poor)
Acute cerebrovascular event - 0.729 (reasonable)
Pneumonia - 0.838 (good)
COPD/Bronchiectasis - 0.714 (low end of reasonable)
Aspiration pneumonitis - 0.711 (low end of reasonable)
Fractured Hip - 0.756 (reasonable)
Respiratory failure - 0.745 (reasonable)

These c-statistics are not very impressive; it must be remembered that 0.5 is effectively zero, and many of these c-statistics are around the low end of reasonable.  It is interesting that Professor Jarman quotes the overall c-statistic for his 9-code model as being 0.921.  Given that he individually compared each HSMR for each code, surely he should be giving the individual c-statistics for each country's subgroup for each specific code?  Professor Jarman has not provided this data; it would certainly be interesting to see the c-statistics for the UK and the US for each code, to see whether there is a relationship between c-statistic disparity and HSMR disparity.

It is also interesting that 75 of Prof Jarman's 247 mortality models failed their statistical goodness-of-fit tests.  The measure of how well the models generally fit is also pretty poor (mean R-squared of 0.25).  It must also be reiterated that the c-statistic will not pick up errors in calibration, so if one country is systematically up-coded relative to another, the c-statistic will not detect this.  The one key question I would like to see answered is just how Professor Jarman selected these 9 codes for the US/UK comparison.  There are also other questions, such as which models failed the goodness-of-fit tests and whether the codes assessed had reasonable R-squared values.  There is so much beneath the surface here; I am convinced this story will run and run.

Thursday, 19 September 2013

HSMRs, coding, big assumptions and implausible conclusions....


The second major flaw in Prof Brian Jarman's UK/US mortality comparison is that it assumes HSMRs are reliable and that coding is equivalent in the UK and US.  If either of these assumptions is false then the comparison rests on extremely dubious foundations.  So firstly to HSMRs: one can certainly do far worse than to read this summary by Prof Spiegelhalter:

"The two indices often come up with different conclusions and do not necessarily correlate with Keogh’s findings: for example, of the first three trusts investigated, Basildon and Thurrock was a high outlier on SHMI but not on HSMR for 2011-12(and was put on special measures), Blackpool was a high outlier on both (no action), and Burton was high on HSMR but not on SHMI (special measures). Keogh emphasised “the complexity of using and interpreting aggregate measures of mortality, including HSMR and SHMI. The fact that the use of these two different measures of mortality to determine which trusts to review generated two completely different lists of outlier trusts illustrates this point." It also suggests that many trusts that were not high on either measure might have had issues revealed had they been examined."

HSMRs and SHMIs are not useless, but they are far from perfect, even when monitoring the trend of one individual hospital's mortality over time.  They are more problematic when comparing hospitals, as variations in coding can have huge effects on the results.  This BMJ article highlights many more concerns over the use and validity of HSMRs, and there are many excellent rapid responses highlighting further associated problems.  Here are some other studies outlining problems with the reliability and/or validity of HSMRs (paper 1, paper 2, paper 3).  This particular segment highlights a huge flaw in HSMRs and in Jarman's UK/US comparison:

"The famous Harvard malpractice study found that 0.25% of admissions resulted in avoidable  death. Assuming an overall hospital death rate of about 5% this implies that around one in 20 inpatient deaths are preventable, while 19 of 20 are unavoidable. We have corroborated this figure in a study of the quality of care in 18 English hospitals (submitted for publication). Quality of care accounts for only a small proportion of the observed variance in mortality between hospitals. To put this another way, it is not sensible to look for differences in preventable deaths by comparing all deaths."

This is the crux of it: if only around 1 in 20 hospital deaths is preventable, then a 45% difference in acute mortality as a result of poorer care is utterly implausible, as the back-of-the-envelope sketch below shows.
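Here is a minimal sketch using the round numbers from the Harvard quote above (a ~5% overall death rate, of which ~1 in 20 deaths is avoidable; both figures are approximations taken from that quote, not my own data):

```python
# Back-of-the-envelope: the most that care quality could plausibly
# move overall hospital mortality, using the Harvard study's figures.
death_rate = 0.05        # ~5% of admissions end in death
avoidable_frac = 0.05    # ~1 in 20 of those deaths is preventable

perfect_care_rate = death_rate * (1 - avoidable_frac)  # 0.0475
max_gap = (death_rate - perfect_care_rate) / perfect_care_rate

print(f"Maximum mortality gap attributable to care: {max_gap:.0%}")  # ~5%
```

Even comparing the worst imaginable care with perfect care, the gap is around 5%, an order of magnitude short of the claimed 45% UK/US difference.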

Now to coding: there is a long history of up-coding in the US and of inadequate, incomplete coding in the UK.  Here is one of the best papers proving that HSMRs are hugely affected by admission practices and coding, something that Prof Jarman seems unwilling to consider as having any effect on the US/UK HSMR difference:

"Claims that variations in hospital standardised mortality ratios from Dr Foster Unit reflect differences in quality of care are less than credible."

I think this applies to Prof Jarman's latest conclusions on US/UK HSMRs; in my opinion his conclusions are less than credible too.  There is also an excellent NEJM piece on the problems with standardised mortality ratios, in a rather reliable and eminent journal.  One recurrent theme in the academic literature is that HES data and standardised mortality ratios are not reliable.  Another recurring theme is that the one person defending them often seems to be a certain Professor Brian Jarman; read into that what you will.
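To illustrate how coding alone can move an HSMR (HSMR = 100 × observed deaths / expected deaths, where the expected count comes from a risk model fed with the coded data), here is a minimal sketch with entirely invented numbers:

```python
# Two hypothetical hospitals with identical patients and identical
# outcomes; only the depth of comorbidity coding differs.
observed_deaths = 500

# Full coding (US-style up-coding): patients look sicker on paper,
# so the risk model expects more deaths.
expected_full = 520

# Under-coding (UK HES-style): the same patients look healthier,
# so fewer deaths are expected.
expected_under = 400

hsmr_full = 100 * observed_deaths / expected_full    # ~96, looks "good"
hsmr_under = 100 * observed_deaths / expected_under  # 125, looks "bad"
print(round(hsmr_full), hsmr_under)
```

Identical care and identical deaths, yet one hospital looks roughly 30% 'worse' purely because of what was typed into the coding system.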


There are so many problems with Professor Jarman's work and conclusions that it is hard to sum them up in one piece; really one needs a whole book.  Firstly, the underlying HES data is unreliable.  Secondly, HSMRs are not reliable and are highly subject to different coding and admission practices.  Thirdly, the US and UK are highly likely to be at opposite ends of the spectrum in terms of coding (up versus down) and are also likely to have extremely different admission/discharge practices.  Fourthly, the UK/US HSMR difference being down to care is utterly implausible.  And fifthly, the UK's massive mortality improvements over the last decade are also utterly implausible.  It appears Professor Jarman has unwittingly scored an own goal; the HSMR has revealed itself with such implausible results.

Monday, 16 September 2013

Many big assumptions have been made: 1. HES Data


The US/UK HSMR comparison made by Prof Brian Jarman is continuing to rumble on, and probably not for reasons that will please the Professor.  Since the leaking of the data to the media a few days back, several astute observations have been made that cast great doubt upon Prof Jarman's conclusions.  This is my take on events and my summary of the problems with Prof Jarman's stance; this is part 1, on HES data:

HES data is of low quality and is unreliable (citation 1, citation 2, citation 3, citation 4, citation 5)

"Concerns remain about the quality of HES data. The overall percentage of admissions with missing or invalid data on age, sex, admission method, or dates of admission or discharge was 2.4% in 2003. For the remaining admissions, 47.9% in 1996 and 41.6% in 2003 had no secondary diagnosis recorded (41.9% and 37.1%, respectively, if day cases are excluded). In contrast to some of the clinical databases, if no information on comorbidity is recorded, we cannot tell whether there is no comorbidity present or if comorbidity has not been recorded. Despite these deficiencies, our predictive models are still good. In the most recent report of the Society of Cardiothoracic Surgeons, 30% of records had missing EuroSCORE variables. Within the Association of Coloproctology database, 39% of patients had missing data for the risk factors included in their final model. A comparison of numbers of vascular procedures recorded within HES and the national vascular database found four times as many cases recorded within HES."

and this from a letter by authors including Bruce Keogh:

"This latest study raises more concerns about hospital episode statistics data, showing that errors are not consistent across the country."

and this from the Royal College of Physicians, as recently as 2012:

"This change is necessary because the current process for the collection of data for central returns that feed HES, and the content of the dataset itself, are both no longer appropriate for the widening purposes for which HES are used. "

I can find little to support Brian Jarman's claim that HES data is accurate and reliable.  Prof Jarman's study relies massively on HES data, as it is this very data from which his HSMRs are calculated.  It would be fascinating if Prof Jarman could produce some published evidence to support his stance on HES data.  If the UK data is less complete than the US data, it could well lead to a massive difference in HSMRs that has nothing to do with care standards, but is purely down to data quality.  The HSMR itself is the subject of another part.

Thursday, 12 September 2013

Channel 4's 'UK/US hospital mortality' story is based on Jarman's sandy foundations


Last night I sat down to watch the Channel 4 news and was deeply upset by what followed.  The 'news' exclusive on the 'increased mortality' in UK hospitals versus those in the US was presented as if the data to prove this theory were robust; there was no discussion of the huge flaws in the methods used, and the panel discussion was completely one-sided.  Channel 4 gave no neutral academics a chance to speak and gave no one the chance to defend the NHS.  It was trial and execution by a biased one-man band; it was shoddy journalism at its worst, very disappointing and very unlike Channel 4's normally excellent coverage.  I shall be factual and careful with what I say next, as it appears some cannot listen to criticism of their pet methodologies without resorting to threats of GMC referral, not the sign of a robust argument I would say.

The story claimed that patients were 45% more likely to die in the UK than in the US when admitted with an acute illness such as pneumonia or septicaemia.  This was based on 'research' done by Professor Brian Jarman, he of Dr Foster fame and a big supporter of the HSMR (Hospital Standardised Mortality Ratio) tool.  It must be noted that Dr Foster is rather close to the Department of Health, with the latter being obliged to promote the business interests of Dr Foster 'intelligence'.  It is worth reading about the rather cosy relationship involving Jarman, Dr Foster and the government, because it puts Jarman's potential motives into the open; conflicts of interest are often key to understanding such matters.

Essentially the Channel 4 story was based upon several assumptions that look rather naive, flawed and ignorant to anyone who has a basic grasp of scientific evidence and statistics.  Firstly, the UK mortality data is based upon HES (Hospital Episode Statistics) data, which is notoriously inaccurate and unreliable.  For example, in the recent past HES data showed there were 17,000 pregnant men in the UK, a truly unbelievable statistic.  There is also abundant evidence showing that HSMRs themselves are very poor tools, even for comparing hospitals within the same country, let alone different continents.  HSMRs are crude tools and many academics feel their use should be abandoned entirely.

"Nonetheless, HSMRs continue to pose a grave public challenge to hospitals, whilst the unsatisfactory nature of the HSMR remains a largely unacknowledged and unchallenged private affair."

The above quote shows exactly what the problem is: here we have a flawed and scientifically dubious measure being used when it should not be, as a political bandwagon and some vested interests run out of control.  Brian Jarman's baby is the HSMR, and he is a powerful man with a large sphere of influence.  It is notable that even the Keogh review was rather critical of the use of HSMRs and the way in which they had been inappropriately used to create divisive anti-NHS propaganda.  There are so many things that can change the HSMR, and many of them have absolutely nothing to do with the actual quality of care provided.

In simple terms, if you put poor-quality data in, then you get dubious data out and will reach dodgy, flawed conclusions.  Firstly, HES data, on which the UK mortality rates are based, is poor, and this means that mortality cannot be adequately adjusted for confounding factors: the data is so inaccurate that information about illness type, severity of illness and co-morbidities cannot be adequately adjusted for in the statistical modelling.  The way data is coded differently in the US and UK is also likely to have a massive effect on the results.  Generally HES data is poor and patients are under-coded, ie their illness and co-morbidities are not coded to be as bad as they actually are.  The exact opposite is true of US coding: the US has a vast army of bureaucrats and has had a marketised system for decades, meaning that over-coding is rife, ie hospitals exaggerate the illness and co-morbidities of patients in order to increase their revenues.  There have also been huge issues with the accuracy of the US data, and there is a huge volume of evidence showing that over-charging as a result of over-coding has been rife in the US, with estimates putting the cost of over-coding in the multi-billion dollar ballpark.

Overall this is a case study in dodgy data and dodgy journalism leading to what appears to be a hugely erroneous conclusion.  Brian Jarman has been reckless and irresponsible in my opinion; he has tried to justify his leaking of unpublished data, but his squirming doesn't cut the mustard for me.  In honesty, Brian Jarman simply doesn't know why the HSMRs are so different in the UK and US.  Interestingly, the UK outperformed the US on fractured hip mortality, a fact sadly ignored by the news and not mentioned by Jarman in his interview.  This is rather salient, as hip fractures are one of the few things coded accurately in the UK, having been attached to a specific tariff for years; yet more evidence that coding, and not care quality, is responsible for the differences in HSMRs.  Overall the likely explanation for the differences is the huge discrepancy in coding practice between the US and the UK: in the US they tend to over-code and in the UK we tend to under-code.  The huge effects of coding and admission practices on HSMRs have been well documented in the BMJ in the past.  There is also the fact that many US patients are discharged from acute hospitals to die elsewhere, and Brian Jarman has admitted this hasn't been taken into account in his calculations.

Brian Jarman should not be leaping to such conclusions, and he should be publishing in a reputable journal rather than recklessly leaking such propaganda to the media.  I wonder whether Brian Jarman has done any work at all to investigate the differences between coding in the US and the UK, or did he simply say what he was nudged towards by the Department of Health?  I also wonder whether Brian Jarman can claim his data is adequately corrected for confounders when it is based on unreliable and inaccurate HES data.  Personally speaking, I have lost all respect for Brian Jarman and am now highly suspicious of the cosy, nepotistic little relationship involving the government, Dr Foster and Brian Jarman.  It is just extremely sad that Victoria McDonald was played and used in such a game.  This whole episode has shown very clearly that one thing the NHS needs is good independent statistics and high-quality data, and Dr Foster is definitely not going to be part of a progressive solution.

Friday, 2 August 2013

MAST cosying up with the chiroquacktic community

It is interesting that MAST Medical are making such efforts to spread the results of their research within the chiropractic community.  The AECC (Anglo-European College of Chiropractic) strangely posted the above page commenting on MAST treatment and suddenly removed it from their website, without explanation.   The full page can be accessed here.  Amongst the interesting quotes are the following:

"Groundbreaking new research has shown that up to 40% of people suffering from chronic lower back pain could now be cured from a simple and inexpensive course of antibiotics........Neil Osborne, Director of Clinic at the College is one of the first practitioners in the UK to undergo a Modic antibiotic spine therapy (MAST) course which puts him at the forefront of this new procedure......The new treatment could save the UK economy billions of pounds"
 
It reads like an uncritical press release for MAST.  The 40% figure is a massive exaggeration of reality, and the 'billions' saved is also pie-in-the-sky stuff.  Neil Osborne is now at the 'forefront' of this new 'procedure', apparently; or rather, he has parted with some cash to be told about the trial by the very researchers who carried it out!  The British Association of Spinal Surgeons has released some far more sensible information on this antibiotics-for-back-pain story:
 
"BASS considers this to be a well conducted trial which provides evidence that a small number of patients could gain some moderate improvement in their condition with a course of antibiotics." 
 
This is a far more honest and representative opinion of the research of Albert et al.  It is also rather fascinating that Hanne Albert, the lead author who thinks that a conflict of interest is not a conflict of interest, is speaking at the McTimoney chiropractic conference this November.  One must remember the track record of the chiropractic profession in plugging unproven and sometimes dangerous treatments, ignoring scientific evidence, suing their critics, exploiting patients with misinformation and indulging in general quackery.  Have a quick peek at these articles on McTimoney too.
 
The fact that MAST are cosying up to the chiropractic profession says a lot.  It seems MAST and the chiropractic profession have rather a lot in common.  It is deeply inappropriate and unethical that MAST are plugging antibiotic treatment for back pain in such a manner.  Firstly practice should not change on the basis of one trial that has not been replicated independently, especially when two of the authors failed to declare their significant conflicting business interests to the publishing Journal.  Secondly the way in which MAST Medical are effectively advertising their services and seemingly exaggerating the potential gains of treatment appears rather cynical to me.  I would not touch MAST Medical with a bargepole until their research has been independently replicated by a trustworthy research group.
 

Wednesday, 31 July 2013

Formal complaint letter sent to European Spine Journal

 
Dear Editor
I am writing as regards the recent publications by Albert et al (1, 2) in the European Spine Journal (ESJ).  These are the indisputable facts of the case:
- Both Albert et al studies were submitted, and all authors declared no conflict of interest, in 2012 (1, 2)
- Two authors (Albert and Manniche) are company directors of both ‘MAST MEDICAL EDUCATIONAL SERVICES LIMITED’ and ‘MAST MEDICAL CONCEPT LIMITED’
- ‘MAST MEDICAL EDUCATIONAL SERVICES LIMITED’ and ‘MAST MEDICAL CONCEPT LIMITED’ were both incorporated in April 2010
- The ESJ’s guidelines on conflict of interest state: “Conflict (if none, “None” or describe financial interest/arrangement with one or more organizations that could be perceived as a real or apparent conflict of interest in the context of the subject of this article)"
 
These are some indisputable facts from the recent correspondence in the ESJ:
 
- The Journal has been made aware of the undeclared conflicts of interest of two authors (Albert and Manniche) relating to the two studies (as stated (3))
- The ESJ has allowed the lead author to publish a response to the ‘No conflict of interest?’ letter which refuses to acknowledge that being a director of a limited company whose work directly relates to the results of the studies is a conflict of interest (“there was no conflict of interest to declare” (4))
- The ESJ Editor has not acknowledged that the authors failed to declare their conflicts of interest (5)
 
Therefore, in conclusion, I would like to lodge a formal complaint with the ESJ over its failure to enforce its own clear guidelines on conflicts of interest.  This failure is manifest in the following:
 
1. The ESJ has issued no corrections to the two studies to include the conflicts of interest of the two authors (Albert and Manniche)
2. The lead author has been allowed to publish a peer-reviewed letter in the ESJ denying that a conflict of interest is a conflict of interest
3. The Journal has not at any point (in the Editor's editorial or the invited editorial) acknowledged that the two authors failed to declare these conflicts of interest
 
If the ESJ is of the opinion that two authors being MAST company directors does not constitute a conflict of interest, then I would be interested to see how this stance could possibly be justified under the Journal’s own guidelines.
 
I would like to make it clear that unless adequate action is now taken by the Journal, I shall be taking this case to COPE,
 
Kind regards