Rich Winkel writes:
> According to Stathis Papaioannou:
> >Why would you not include the well-known fact that driving at high
> >speed is more likely to kill someone as "evidence"? If the driver
> >honestly did not know this, say due to having an intellectual
> >disability, then he would have diminished responsibility for the
> >accident.
>
> I don't know how you're using the term "responsibility", but in any
> case the issue is whether a driver is willing to slow down despite
> not seeing any obvious hazards.
Evidence isn't always obvious. Past experience shows that there might
be hazards around even though you can't see them, and you are being
irresponsible if you ignore this fact. The only excuse is genuine
ignorance of it, in which case you have no reason to slow down when
you see no hazards.
> >Astronomy does not really have an ethical dimension to it, but most
> >other sciences do. Discovering that cyanide kills people is science;
> >deciding to poison your spouse with cyanide to collect on the
> >insurance is intimately tied up with the science, but it is not
> >itself in the domain of science.
>
> Precisely. Good medical research is science, but medical practice
> often involves matters of expedience, cultural bias, conflicts of
> interest and habit.
OK, but for the purposes of this discussion we should try to separate the
purely scientific facts from the rest. If the scientific evidence shows that
cyanide is good for headaches, and people die as a result, then perhaps
the scientists have been negligent, incompetent, or deceitful.
> >As for doing nothing often being the best course of action, that's
> >certainly true, and it *is* a question that can be analysed
> >scientifically, which is the point of placebo controlled drug trials.
>
> But of course if the research is never done or never sees the light of
> day, something other than science is going on.
Right, but we're getting away from the subject of epistemology and
onto the specifics of particular treatments and the evidence supporting
them. Personally, I have experience of several situations where I believed
that a new treatment would be helpful on the basis of the published
evidence but subsequently found, either through my own experience or
through new evidence coming to light maybe years later, that it caused
more harm than good. There is at least one example of a harmful drug
side-effect (olanzapine causing diabetes) that was so obvious to me that
it crossed my mind that adverse research findings may have been suppressed;
on the other hand, I also have experience of treatments with well-documented
adverse effects which I never seem to encounter, and I don't surmise that
in those cases the data has been faked to make the drug look bad.
> >You are suggesting that certain treatments believed to be helpful
> >for mental illness by the medical profession are not in fact helpful.
> >You may be right, because the history of medicine is full of
> >enthusiastically promoted treatments that we now know are useless
> >or harmful. However, this is no argument against the scientific
> >method in medicine or any other field: we can only go on our best
> >evidence.
>
> I'm not arguing against the scientific method. I only wish medical
> science practiced it more often. It is unscientific to equate
> absence of evidence with evidence of absence.
Yes, and everyone is acutely aware that a new treatment may still be harmful
even though the present best evidence suggests that it isn't. This needs to be
taken into account in any risk-benefit analysis: that is, the "risks" equation
should include not only the weighted probability of known adverse events, but
also the weighted probability of as yet unrecognised adverse events. It is
difficult to quantify this latter variable, but it does play a part in making clinical
decisions, perhaps not always obviously so. For example, new treatments are
generally used more cautiously than older treatments: in the more severely ill,
in cases where the older treatments have failed, in lower dosages. As more
experience is gained, it becomes clearer whether the new treatment is in fact
better and safer than the old one, or better than no treatment at all, and it is
used more widely and more confidently.
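The "risks" term described above can be sketched numerically. This is a minimal toy calculation, not clinical data: every probability and severity below is a made-up illustration, and the idea of adding a prior for unrecognised adverse events is just the weighted-probability notion from the paragraph above.

```python
# Toy risk term: sum of (probability * severity) over known adverse
# events, plus a prior term for as-yet-unrecognised adverse events.
# All numbers are hypothetical illustrations, not data about any drug.

def expected_risk(known_events, p_unknown, unknown_severity):
    """known_events: list of (probability, severity) pairs for
    documented adverse events; p_unknown and unknown_severity form
    a crude prior for harms not yet recognised."""
    known = sum(p * severity for p, severity in known_events)
    return known + p_unknown * unknown_severity

# An older, well-characterised treatment: more known adverse events,
# but a small residual-uncertainty prior.
old_drug = expected_risk([(0.05, 2.0), (0.01, 8.0)],
                         p_unknown=0.005, unknown_severity=5.0)

# A newer treatment: cleaner known profile, but a larger prior for
# unrecognised harm.
new_drug = expected_risk([(0.02, 2.0)],
                         p_unknown=0.05, unknown_severity=5.0)

print(round(old_drug, 3))  # known adverse events dominate
print(round(new_drug, 3))  # the unrecognised-harm prior dominates
```

With these invented numbers the newer drug comes out riskier despite its cleaner known profile, which matches the cautious use of new treatments described above.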
It would be interesting to retrospectively analyse the incidence and severity of
adverse effects of medical treatments not suspected at the time of their initial
clinical use, allowing a quantitative estimate of the abovementioned weighted
probability for use in clinical decision-making. I don't know if this has ever been
attempted.
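The retrospective analysis proposed above could, in outline, look like this: take a historical record of treatments, flag those whose adverse effects were unsuspected at initial clinical use, and compute an empirical base rate and mean severity. The records below are entirely hypothetical placeholders, just to show the shape of the estimate.

```python
# Toy retrospective estimate of the "unrecognised adverse event" prior.
# Each record: (treatment name, unsuspected adverse effect found later?,
# severity on a 0-10 scale). All entries are hypothetical.

records = [
    ("drug_a", True, 6.0),
    ("drug_b", False, 0.0),
    ("drug_c", True, 3.0),
    ("drug_d", False, 0.0),
    ("drug_e", False, 0.0),
]

hits = [sev for _, found, sev in records if found]
p_unrecognised = len(hits) / len(records)              # empirical base rate
mean_severity = sum(hits) / len(hits) if hits else 0.0  # mean severity when harm emerged

print(p_unrecognised)  # 0.4
print(mean_severity)   # 4.5
```

The real difficulty, of course, is assembling an unbiased historical sample and a defensible severity scale, not the arithmetic.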
Stathis Papaioannou
You received this message because you are subscribed to the Google Groups "Everything List" group.
Received on Fri Aug 18 2006 - 08:48:03 PDT