Indian Journal of Surgery
Medknow Publications on behalf of Association of Surgeons of India
ISSN: 0972-2068
Vol. 66, Num. 1, 2004, pp. 13-14
Editorial
Time to learn from our mistakes?
Atul D. Garud
Director, Clinical Audit, Ethics and Quality Improvement,
P. D. Hinduja National Hospital and Research Centre,
Veer Savarkar Marg, Mahim, Mumbai - 400016, India.
Last year, an unusual and unfortunate case was discussed at the
disciplinary hearings of the General Medical Council of the U.K.[1] The
case related to a
32-year-old woman diagnosed as having acute abdominal pain
due to gallstones. The details of her clinical condition are unknown, but it
was reported that a decision to perform an emergency open cholecystectomy was
made by a locum Consultant Surgeon, who delegated the surgery to a second-year
registrar. During surgery, the registrar experienced some problems, possibly
due to difficult anatomy, and divided the hepatic artery, mistaking it for the
cystic artery. She then divided the common bile duct, and also inadvertently
nicked the portal vein. At that point, the patient had lost a little more than
a litre of blood, but the registrar managed to control the bleeding with a
clamp and a pack. The consultant arrived after this and took over. However,
in spite of his efforts, the patient, who had lost about 5.5 litres of blood in all,
died 3 days later.
Several questions arise from this unfortunate episode. Could
this occur at any hospital, anywhere? How do we deal with this? How can we
prevent it from happening again? What sort of safeguards do we have to protect
our patients? What lessons do we learn from this?
The easiest step, and one that must not be undertaken,
is to name, blame and shame the person/s responsible. That will not solve anything.
An individual is only the end product of a system. If the system has a faulty
design, the fault is likely to percolate down, and unless the faulty design
is rectified, the error is likely to occur again. The concept of 'root cause
analysis' has its origin in this theory.
Healthcare is a complicated, high-profile and risky profession.
There is a certain amount of inherent risk in every surgical procedure
that we, as surgeons, undertake. The degree of risk may vary with the specialty,
but there is an irreducible minimum, legitimate hazard in our day-to-day activity.
This is further compounded by the potential for errors at various stages of
a patient's work-up and treatment. It is little wonder, therefore, that surgeons
are constantly held responsible for adverse outcomes in patient care.
What, if anything, can be done to minimize these errors and adverse
outcomes? This question assumes special importance in the present climate of
heightened consumer activism and higher public expectations, fuelled by the explosion
of knowledge and information on the Internet. Gone are the days when patients
thought 'my doctor knows what is best for me', although this attitude may
still be prevalent in our rural areas. Patients are becoming increasingly aware
of their rights, especially the right to make informed choices or decisions.
Undoubtedly, this is the way it should be.
Where does this leave our profession? Centuries ago, Hippocrates
warned us: "Primum non nocere!" (Above all, do NO HARM!) Despite
this stern admonition, operations have been performed on wrong patients, on
wrong parts, on wrong sides and, worst of all, for wrong indications. The now
well-known and oft-quoted Harvard study,[2] which reviewed hospital records
from 1984, indicated that nearly 4% of all hospital admissions end in adverse
events. An Australian study published in 1995[3] showed an even higher incidence
of adverse events at 16%.
These are alarming statistics indeed and one shudders to think what the figures
might be in our overpopulated, understaffed and poorly funded hospitals in
India.
Leaving aside criminal negligence and deliberate acts of commission,
most errors are attributable to faulty systems rather than faulty individuals.
This is not to deny the existence of human errors, but most psychologists and
those who have studied the epidemiology of errors feel that human errors are
the consequence of, and not the cause of, a flawed system design. The premise
that most humans are fallible at some point or another, and hence the concept
of a 'fail-proof' system, has worked well in the aviation industry.
Healthcare is not, and cannot be, so mechanical; operation theatres are not
cockpits. Otherwise, we run the risk of 'dehumanizing' the delivery of
healthcare to our patients. This is a charge often levelled against the
'high-tech' environment of critical care units.
Epidemiological evidence has also shown that errors are more likely
to happen when clinicians are inexperienced or when new procedures are being
introduced. They are seen more frequently in patients at the extremes of age,
in complicated illnesses, in emergency situations and in patients with
prolonged hospital stays.[4] Factors such as poor training, poor supervision
and adverse working conditions, which may lead to a lack of motivation, may
all play a part.
In order to minimize the incidence of errors, a comprehensive
strategy taking into account all the factors listed above should be put into
practice. There are several recommendations, broadly classified as follows:
- Creating awareness
- Learning the magnitude of the problem
- Incorporating remedial measures that are simple and workable
The guiding principle behind all these remedial measures rests
on three pillars: a 'non-punitive culture', full confidentiality, and indemnity
from prosecution, except in cases of criminal negligence.
In this issue of the Journal, Drs. Bhattacharya and Catherine
have addressed the subject of reporting of surgical errors, wherein they discuss
the merits and demerits of voluntary and mandatory reporting. They rightly
point out that mandatory reporting may be necessary for certain serious errors
such as perioperative death, but on the whole, a voluntary reporting system
which seeks to impart knowledge, improve patient care and evolve a change in
the system is more likely to succeed in the long run.
It is important to understand that reporting systems alone
will not bring about improvement. Information garnered from the reporting of
events has to be analyzed in such a way as to provide direction for a change
in the organisational process or system. Most errors occur as a result of some
departure from safe and 'good' practice protocols. Such protocols have to be
set down taking into account local needs and conditions. The National Institute
for Clinical Excellence (NICE) in the U.K. has drawn up several practice guidelines
and protocols, which can be modified or adapted to our conditions.
There is a great deal of debate about the role of accreditation
in reducing errors. Drawing a parallel from industry, the accreditation process
is an attempt at continuous quality improvement. As a measure of compliance
with the standards set, it goes a long way towards reducing risk.
Where does this leave the patients, the so-called 'victims'
of acts of omission who bear the harm? Are they not entitled to any compensation?
This is a much broader question and will require a lot of thinking from the
government and the insurance companies. In the U.K., the National Health
Service has recently announced a compensation package of remedial care,
apologies and monetary compensation of up to £50,000 without the need
for litigation.[5] The patients (or their relatives) retain the option
of litigation if they choose, but they must waive it if they accept the
package.
In conclusion, once we accept the premise "To err is human",
we also concede that 'zero error' and 'zero tolerance' of error are both
unattainable goals. What can certainly be done, however, is to minimize the
disastrous consequences through regular clinical audits, and a shift of
emphasis from blame to learning, from individuals to systems and from
fault-finding to fact-finding. Above all, an environment and culture must be
created where it is safe to admit 'I made an honest error; let me learn
whether I can prevent it from happening again', and for the organization to
revamp the system which led to the error, without the spectre of blame and
shame looming large.
REFERENCES
1. Dyer O. Surgeons cleared of serious professional misconduct by GMC. Br Med J 2002;325:408.
2. Brennan TA, Leape LL, Laird NM, Hebert L, Localio AR, Lawthers AG, et al. Incidence of adverse events and negligence in hospitalized patients. Results of the Harvard Medical Practice Study I. N Engl J Med 1991;324:370-6.
3. Wilson RM, Runciman WB, Gibberd RW, Harrison BT, Newby L, Hamilton JD. The Quality in Australian Health Care Study. Med J Aust 1995;163:458-71.
4. Weingart SN, Wilson RM, Gibberd RW, Harrison B. Epidemiology of medical error. Br Med J 2001;320:774-6.
5. Dyer C. NHS staff should inform patients of negligent acts. Br Med J 2003;327:7.
© 2004 Indian Journal of Surgery.