Monday, June 22, 2009

Diffusion MRI

PROCEDURE OF THE DAY

Diffusion MRI

Diffusion MRI is a magnetic resonance imaging (MRI) method that produces in vivo images of biological tissues weighted with the local microstructural characteristics of water diffusion. The field of diffusion MRI can best be understood in terms of two distinct classes of application - Diffusion Weighted MRI and Diffusion Tensor MRI.

In Diffusion Weighted Imaging (DWI), each image voxel (three dimensional pixel) has an image intensity that reflects a single best measurement of the rate of water diffusion at that location. This measurement is far more sensitive to early changes after a stroke than more traditional MRI measurements such as T1 or T2 relaxation rates. DWI is most applicable when the tissue of interest is dominated by isotropic water movement - the diffusion rate appears to be the same when measured along any axis.

Diffusion Tensor Imaging (DTI) is important when a tissue - such as the neural axons of white matter in the brain or muscle fibers in the heart - has an internal fibrous structure analogous to the anisotropy of some crystals. The result is that water will diffuse more rapidly in the direction aligned with the internal structure and more slowly as it moves perpendicular to the preferred direction. This also means that the measured rate of diffusion will differ depending on the direction from which an observer is looking. In DTI, each voxel therefore has one or more pairs of parameters: a rate of diffusion and a preferred direction of diffusion - described in terms of three-dimensional space - for which that parameter is valid. The properties of each voxel of a single DTI image are usually calculated by vector or tensor math from six or more different diffusion-weighted acquisitions, each obtained with a different orientation of the diffusion-sensitizing gradients. In some methods, hundreds of measurements - each making up a complete image - are made to generate a single resulting calculated image data set. The higher information content of a DTI voxel makes it extremely sensitive to subtle pathology in the brain. In addition, the directional information can be exploited at a higher level of structure to select and follow neural tracts through the brain - a process called tractography.[1][2]

A more precise statement of the image acquisition process is that the image intensities at each position are attenuated, depending on the strength (b-value) and direction of the so-called magnetic diffusion gradient, as well as on the local microstructure in which the water molecules diffuse. The more attenuated the image is at a given position, the more diffusion there is in the direction of the diffusion gradient. In order to measure the tissue's complete diffusion profile, one needs to repeat the MR scans, applying different directions (and possibly strengths) of the diffusion gradient for each scan.

Traditionally, in diffusion-weighted imaging (DWI), three gradient directions are applied, sufficient to estimate the trace of the diffusion tensor or 'average diffusivity', a putative measure of edema. Clinically, trace-weighted images have proven to be very useful to diagnose vascular strokes in the brain, by early detection (within a couple of minutes) of the hypoxic edema.

More extended diffusion tensor imaging (DTI) scans derive neural tract directional information from the data using 3D or multidimensional vector algorithms based on six or more gradient directions, the minimum sufficient to compute the diffusion tensor. The diffusion tensor model is a rather simple model of the diffusion process, assuming homogeneity and linearity of the diffusion within each image voxel. From the diffusion tensor, diffusion anisotropy measures such as the Fractional Anisotropy (FA) can be computed. Moreover, the principal direction of the diffusion tensor can be used to infer the white-matter connectivity of the brain (i.e. tractography; trying to see which part of the brain is connected to which other part).
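
As a rough illustration of how such a tensor fit works, the sketch below estimates the six unique tensor elements of a single voxel by linear least squares on the log-attenuated signal. It is a minimal sketch, not any particular scanner's or software package's method: the b-value, the six gradient directions, and the "measured" signals are all assumed, illustrative values, and only numpy is used.

import numpy as np

b = 1000.0  # s/mm^2, assumed diffusion weighting
# Six non-collinear unit gradient directions (an illustrative choice)
g = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1],
              [1, 1, 0], [1, 0, 1], [0, 1, 1]], dtype=float)
g /= np.linalg.norm(g, axis=1, keepdims=True)

# Each row maps the unique tensor elements [Dxx, Dyy, Dzz, Dxy, Dxz, Dyz]
# to the attenuation exponent measured along that gradient direction.
B = np.column_stack([g[:, 0]**2, g[:, 1]**2, g[:, 2]**2,
                     2 * g[:, 0] * g[:, 1],
                     2 * g[:, 0] * g[:, 2],
                     2 * g[:, 1] * g[:, 2]])

# Synthetic "measurements" for one voxel with a fibre along x (illustrative)
D_true = np.diag([1.7e-3, 0.3e-3, 0.3e-3])            # mm^2/s
S0 = 1.0
S = S0 * np.exp(-b * np.einsum('ij,jk,ik->i', g, D_true, g))

# Least-squares solve of  ln(S0/S) / b = B @ d  for the tensor elements d
d = np.linalg.lstsq(B, np.log(S0 / S) / b, rcond=None)[0]
D = np.array([[d[0], d[3], d[4]],
              [d[3], d[1], d[5]],
              [d[4], d[5], d[2]]])

evals, evecs = np.linalg.eigh(D)
print("recovered principal diffusion direction:", evecs[:, -1])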

Recently, more advanced models of the diffusion process have been proposed that aim to overcome the weaknesses of the diffusion tensor model. Amongst others, these include q-space imaging and generalized diffusion tensor imaging.


Bloch-Torrey Equation


In 1956, H.C. Torrey mathematically showed how the Bloch equation for magnetization would change with the addition of diffusion.[3] Torrey modified Bloch's original description of transverse magnetization to include diffusion terms and the application of a spatially varying gradient. The Bloch-Torrey equation neglecting relaxation is:

\frac{dM_+}{dt}=-j \gamma \vec r \cdot \vec G M_+ + \vec \nabla^T \cdot \vec {\vec D} \cdot \vec \nabla M_+

For the simplest case where the diffusion is isotropic the diffusion tensor is

\vec {\vec D} = D \cdot \vec I = D \cdot \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix},

which means that the Bloch-Torrey equation will have the solution

M_+(\vec r,t)=M_0e^{-\frac{1}{3}D\gamma ^2G^2t^3}e^{-j\gamma \vec r \cdot \int_0^tdt' \vec G(t')}.

This demonstrates that, under a constant gradient, the diffusion-induced attenuation of the transverse magnetization grows with the cube of time. Anisotropic diffusion has a similar solution method, but with a more complex diffusion tensor.
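
To make the cubic time dependence concrete, the short sketch below evaluates the attenuation factor exp(-(1/3) D gamma^2 G^2 t^3) from the solution above for a constant gradient. The gradient amplitude and diffusion coefficient are assumed, illustrative values, not a specific scanner protocol.

import numpy as np

gamma = 2.675e8   # rad s^-1 T^-1, proton gyromagnetic ratio
G = 10e-3         # T/m, constant gradient amplitude (assumed)
D = 2.0e-9        # m^2/s, roughly free water at body temperature

t = np.array([10e-3, 20e-3, 40e-3])                     # seconds
attenuation = np.exp(-(1.0 / 3.0) * D * gamma**2 * G**2 * t**3)
print(attenuation)  # doubling t multiplies the exponent by 8 (cubic growth)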

Diffusion-weighted imaging

Diffusion-weighted imaging is an MRI method that produces in vivo magnetic resonance images of biological tissues weighted with the local characteristics of water diffusion.

DWI is a modification of regular MRI techniques, and is an approach which utilizes the measurement of the Brownian motion of molecules. Regular MRI acquisition utilizes the behaviour of protons in water to generate contrast between clinically relevant features of a particular subject. The versatile nature of MRI is due to this capability of producing contrast, called weighting. In a typical T1-weighted image, water molecules in a sample are excited by radiofrequency pulses while held in a strong magnetic field. This causes many of the protons in water molecules to precess simultaneously, producing signals in MRI. In T2-weighted images, contrast is produced by measuring the loss of coherence or synchrony between the water protons. When water is in an environment where it can freely tumble, relaxation tends to take longer. In certain clinical situations, this can generate contrast between an area of pathology and the surrounding healthy tissue.

In diffusion-weighted images, instead of a homogeneous magnetic field, the homogeneity is varied linearly by a pulsed field gradient. Since precession is proportional to the magnet strength, the protons begin to precess at different rates, resulting in dispersion of the phase and signal loss. A second, matching gradient pulse is then applied to refocus or rephase the spins. The refocusing will not be perfect for protons that have moved during the time interval between the pulses, so the signal measured by the MRI machine is reduced. This reduction in signal due to the application of the pulsed gradient can be related to the amount of diffusion that is occurring through the following equation:

\frac{S}{S_0} = e^{-\gamma^2 G^2 \delta^2 \left( \Delta - \delta /3 \right) D} = e^{-b D}\,

where S0 is the signal intensity without the diffusion weighting, S is the signal with the gradient, γ is the gyromagnetic ratio, G is the strength of the gradient pulse, δ is the duration of the pulse, Δ is the time between the two pulses, and finally, D is the diffusion constant.

By rearranging the formula to isolate the diffusion coefficient, it is possible to obtain an idea of the properties of diffusion occurring within a particular voxel (volume picture element). These values, called apparent diffusion coefficients (ADC), can then be mapped as an image, using diffusion as the contrast.
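
The rearrangement amounts to taking the log-ratio of the two acquisitions and dividing by the b-value. A minimal per-voxel sketch, with tiny made-up "images" standing in for real data:

import numpy as np

b = 1000.0                                  # s/mm^2
S0 = np.array([[1.00, 0.95], [0.98, 1.02]]) # signal without diffusion weighting
S  = np.array([[0.40, 0.45], [0.70, 0.38]]) # signal with the gradient applied

adc = np.log(S0 / S) / b                    # mm^2/s, one value per voxel
print(adc)  # lower values (restricted diffusion) appear dark on an ADC map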

The first successful clinical application of DWI was in imaging the brain following stroke in adults. Areas which were injured during a stroke showed up "darker" on an ADC map compared to healthy tissue. At about the same time as it became evident to researchers that DWI could be used to assess the severity of injury in adult stroke patients, they also noticed that ADC values varied depending on which way the pulse gradient was applied. This orientation-dependent contrast is generated by diffusion anisotropy, meaning that the diffusion in parts of the brain has directionality. This may be useful for determining structures in the brain which could restrict the flow of water in one direction, such as the myelinated axons of nerve cells (which are affected by multiple sclerosis). However, in imaging the brain following a stroke, it may actually prevent the injury from being seen. To compensate for this, it is necessary to apply a mathematical operator, called a tensor, to fully characterize the motion of water in all directions. This tensor is called a diffusion tensor:

\bar{D} = \begin{bmatrix} D_{xx} & D_{xy} & D_{xz} \\ D_{xy} & D_{yy} & D_{yz} \\ D_{xz} & D_{yz} & D_{zz} \end{bmatrix}

Diffusion-weighted images are very useful to diagnose vascular strokes in the brain. Diffusion tensor imaging is being developed for studying the diseases of the white matter of the brain as well as for studies of other body tissues (see below).

Diffusion Anisotropy indices

There are various coefficients used for estimating the anisotropy from the diffusion matrix. Here is a list of a few of them:

Fractional Anisotropy (FA)
\sqrt{\frac{3\left[(\lambda_1-\langle\lambda\rangle)^2+(\lambda_2-\langle\lambda\rangle)^2+(\lambda_3-\langle\lambda\rangle)^2\right]}{2\left(\lambda_1^2+\lambda_2^2+\lambda_3^2\right)}}
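
A direct transcription of this formula, applied to an illustrative set of eigenvalues (the values are assumed, roughly white-matter-like):

import numpy as np

lam = np.array([1.7e-3, 0.3e-3, 0.3e-3])   # mm^2/s, illustrative eigenvalues
mean_lam = lam.mean()
fa = np.sqrt(3 * np.sum((lam - mean_lam)**2) / (2 * np.sum(lam**2)))
print(fa)   # close to 1 for strongly anisotropic diffusion, 0 for isotropic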

Diffusion tensor imaging

Diffusion tensor imaging (DTI) is a magnetic resonance imaging (MRI) technique that enables the measurement of the restricted diffusion of water in tissue in order to produce neural tract images instead of using this data solely for the purpose of assigning contrast or colors to pixels in a cross sectional image. It also provides useful structural information about muscle - including heart muscle, as well as other tissues such as the prostate. [4]

History

In 1990, Michael Moseley reported that water diffusion in white matter was anisotropic - the effect of diffusion on proton relaxation varied depending on the orientation of tracts relative to the orientation of the diffusion gradient applied by the imaging scanner. He also pointed out that this should best be described by a tensor.[5] Aaron Filler and colleagues reported in 1991 on the use of MRI for tract tracing in the brain using a contrast agent method but pointed out that Moseley's report on polarized water diffusion along nerves would affect the development of tract tracing.[6] A few months after submitting that report, in 1991, the first successful use of diffusion anisotropy data to carry out the tracing of neural tracts curving through the brain without contrast agents was accomplished.[7][1][8] Filler and colleagues identified both vector and tensor based methods in the patents, but the data for these initial images was obtained using the following sets of vector formulas:

(\text{vector length})^2 = BX^2 + BY^2 + BZ^2 \,

\text{diffusion vector angle between }BX\text{ and }BY = \arctan \frac{BY}{BX}

\text{diffusion vector angle between }BX\text{ and }BZ = \arctan \frac{BZ}{BX}

\text{diffusion vector angle between }BY\text{ and }BZ = \arctan \frac{BY}{BZ}
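
A literal transcription of these vector formulas, with BX, BY, and BZ standing in for per-axis diffusion measurements (the numbers are purely illustrative):

import numpy as np

BX, BY, BZ = 0.8, 0.3, 0.2   # illustrative per-axis diffusion values

vector_length = np.sqrt(BX**2 + BY**2 + BZ**2)
angle_xy = np.degrees(np.arctan(BY / BX))   # angle between BX and BY
angle_xz = np.degrees(np.arctan(BZ / BX))   # angle between BX and BZ
angle_yz = np.degrees(np.arctan(BY / BZ))   # angle between BY and BZ

print(vector_length, angle_xy, angle_xz, angle_yz)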

The first DTI image showing neural tracts curving through the brain in Macaca fascicularis (Filler et al. 1992)[8]

The use of mixed contributions from gradients in the three primary orthogonal axes in order to generate an infinite number of differently oriented gradients for tensor analysis was also identified in 1992 as the basis for accomplishing tensor descriptions of water diffusion in MRI voxels.[9][10][11] Both vector and tensor methods provide a "rotationally invariant" measurement - the magnitude will be the same no matter how the tract is oriented relative to the gradient axes - and both provide a three-dimensional direction in space; however, the tensor method is more efficient and accurate for carrying out tractography.[1]

The use of electromagnetic data acquisitions from six or more directions to construct a tensor ellipsoid was known from other fields at the time,[12] as was the use of the tensor ellipsoid to describe diffusion. [13] The invention of DTI therefore involved two aspects - 1) the application of known methods from other fields for the generation of MRI tensor data and 2) the usable introduction of a three dimensional selective neural tract "vector graphic" concept operating at a macroscopic level above the scale of the image voxel, in a field where two dimensional pixel imaging (bit mapped graphics) had been the only method used since MRI was originated.

Applications


The principal application is in the imaging of white matter where the location, orientation, and anisotropy of the tracts can be measured. The architecture of the axons in parallel bundles, and their myelin sheaths, facilitates the diffusion of the water molecules preferentially along their main direction. Such preferentially oriented diffusion is called anisotropic diffusion.
Tractographic reconstruction of neural connections via DTI

The imaging of this property is an extension of diffusion MRI. If a series of diffusion gradients (i.e. magnetic field variations in the MRI magnet) are applied that can determine at least 3 directional vectors (use of 6 different gradients is the minimum and additional gradients improve the accuracy for "off-diagonal" information), it is possible to calculate, for each voxel, a tensor (i.e. a symmetric positive definite 3×3 matrix) that describes the 3-dimensional shape of diffusion. The fiber direction is indicated by the tensor's main eigenvector. This vector can be color-coded, yielding a cartography of the tracts' position and direction (red for left-right, blue for superior-inferior, and green for anterior-posterior). The brightness is weighted by the fractional anisotropy which is a scalar measure of the degree of anisotropy in a given voxel. Mean Diffusivity (MD) or Trace is a scalar measure of the total diffusion within a voxel. These measures are commonly used clinically to localize white matter lesions that do not show up on other forms of clinical MRI.
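
A per-voxel sketch of the quantities described here: mean diffusivity, fractional anisotropy, and an FA-weighted RGB colour derived from the principal eigenvector. The tensor values and the axis convention (x = left-right, y = anterior-posterior, z = superior-inferior) are assumptions for illustration, not output from any particular scanner.

import numpy as np

D = np.array([[1.6e-3, 0.1e-3, 0.0],
              [0.1e-3, 0.4e-3, 0.0],
              [0.0,    0.0,    0.3e-3]])   # symmetric 3x3 diffusion tensor

evals, evecs = np.linalg.eigh(D)           # eigenvalues in ascending order
md = evals.mean()                          # mean diffusivity (trace / 3)
fa = np.sqrt(3 * np.sum((evals - md)**2) / (2 * np.sum(evals**2)))

e1 = evecs[:, -1]                          # principal eigenvector = fibre direction
rgb = fa * np.abs(e1)                      # red/green/blue weighted by FA
print(md, fa, rgb)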

Diffusion tensor imaging data can be used to perform tractography within white matter. Fiber tracking algorithms can be used to track a fiber along its whole length (e.g. the corticospinal tract, through which motor information travels from the motor cortex to the spinal cord and the peripheral nerves). Tractography is a useful tool for measuring deficits in white matter, such as in aging. Its estimation of fiber orientation and strength is increasingly accurate, and it has widespread potential implications in the fields of cognitive neuroscience and neurobiology.

Some clinical applications of DTI are in the tract-specific localization of white matter lesions such as trauma and in defining the severity of diffuse traumatic brain injury. The localization of tumors in relation to the white matter tracts (infiltration, deflection) has been one of the most important initial applications. In surgical planning for some types of brain tumors, surgery is aided by knowing the proximity and relative position of the corticospinal tract and a tumor.

The use of DTI for the assessment of white matter in development, pathology and degeneration has been the focus of over 2,500 research publications since 2005. It promises to be very helpful in distinguishing Alzheimer's disease from other types of dementia. Applications in brain research cover e.g. connectionistic investigation of neural networks in vivo.[14]

DTI also has applications in the characterization of skeletal and cardiac muscle. The sensitivity to fiber orientation also appears to be helpful in the arena of sports medicine where it greatly aids imaging of structure and injury in muscles and tendons.

A recent study at Barnes-Jewish Hospital and Washington University School of Medicine of healthy persons and both newly affected and chronically afflicted individuals with optic neuritis caused by multiple sclerosis (MS) showed that DTI can be used to assess the course of the condition's effects on the optic nerve and on vision, because it can assess the axial diffusivity of water in that area.

VIDEO



NEXT UP

Magnetic Resonance Imaging

Sunday, June 21, 2009

Phage therapy

PROCEDURE OF THE DAY

Phage therapy

Phage therapy is the therapeutic use of bacteriophages to treat pathogenic bacterial infections. Although extensively used and developed mainly in former Soviet Union countries for about 90 years, this method of therapy is still being tested elsewhere for treatment of a variety of bacterial and poly-microbial biofilm infections, and has not yet been approved in countries other than Georgia. Phage therapy has many potential applications in human medicine as well as dentistry, veterinary science, and agriculture.[1] If the target host of a phage therapy treatment is not an animal, however, then the term "biocontrol" (as in phage-mediated biocontrol of bacteria) is sometimes employed rather than "phage therapy".

A hypothetical benefit of phage therapy is that bacteriophages can be much more specific than more common drugs, so they can be chosen to be harmless not only to the host organism (human, animal, or plant), but also to other beneficial bacteria, such as gut flora, reducing the chances of opportunistic infections. They also have a high therapeutic index; that is, phage therapy gives rise to few if any side effects, as opposed to drugs, and does not stress the liver. Because phages replicate in vivo, a smaller effective dose can be used. On the other hand, this specificity is also a disadvantage: a phage will only kill a bacterium if it is a match to the specific strain. Thus, phage mixtures are often applied to improve the chances of success, or samples can be taken and an appropriate phage identified and grown.

Phages are currently being used therapeutically to treat bacterial infections that do not respond to conventional antibiotics, particularly in the country of Georgia.[2][3][4] They tend to be more successful than antibiotics where there is a biofilm covered by a polysaccharide layer, which antibiotics typically cannot penetrate. [5] In the West, no therapies are currently authorized for use on humans, although phages for killing food poisoning bacteria (Listeria) are now in use.[6]

History

Following the discovery of bacteriophages by Frederick Twort and Felix d'Hérelle[7] in 1915 and 1917, phage therapy was immediately recognized by many to be a key way forward for the eradication of bacterial infections. A Georgian, George Eliava, was making similar discoveries. He travelled to the Pasteur Institute in Paris where he met d'Hérelle, and in 1926 he founded the Eliava Institute in Tbilisi, Georgia, devoted to the development of phage therapy.

In neighbouring countries including Russia, extensive research and development soon began in this field. In the USA during the 1940s, commercialization of phage therapy was undertaken by the large pharmaceutical company, Eli Lilly.

Whilst knowledge was being accumulated regarding the biology of phages and how to use phage cocktails correctly, early uses of phage therapy were often unreliable. When antibiotics entered clinical use in the early 1940s and were marketed widely in the USA and Europe, Western scientists mostly lost interest in further use and study of phage therapy for some time.[8]

Isolated from Western advances in antibiotic production in the 1940s, Russian scientists continued to develop already successful phage therapy to treat the wounds of soldiers in field hospitals. During World War II, the Soviet Union used bacteriophages to treat many soldiers infected with various bacterial diseases e.g. dysentery and gangrene. The success rate was as good as, if not better than any antibiotic.[citation needed] Russian researchers continued to develop and to refine their treatments and to publish their research and results. However, due to the scientific barriers of the Cold War, this knowledge was not translated and did not proliferate across the world.[9][10]

There is an extensive library and research center at the Eliava Institute in Tbilisi, Georgia. Phage therapy is today a widespread form of treatment in that region. For 80 years Georgian doctors have been treating local people, including babies and newborns, with phages.

As a result of the development of antibiotic resistance since the 1950s and an advancement of scientific knowledge, there has been renewed interest worldwide in the ability of phage therapy to eradicate bacterial infections and chronic polymicrobial biofilm, along with other strategies.

Phages have been investigated as a potential means to eliminate pathogens like Campylobacter in raw food[11] and Listeria in fresh food or to reduce food spoilage bacteria.[12] In agricultural practice phages were used to fight pathogens like Campylobacter, Escherichia and Salmonella in farm animals, Lactococcus and Vibrio pathogens in fish from aquaculture and Erwinia and Xanthomonas in plants of agricultural importance. The oldest use was, however, in human medicine. Phages have been used against diarrheal diseases caused by E. coli, Shigella or Vibrio and against wound infections caused by facultative pathogens of the skin like staphylococci and streptococci. Recently the phage therapy approach has been extended to systemic and even intracellular infections, and non-replicating phages and isolated phage enzymes such as lysins have been added to the antimicrobial arsenal. However, actual proof for the efficiency of these phage approaches in the field or the hospital is not available.[12]

Some of the interest in the West can be traced back to 1994, when Soothill demonstrated (in an animal model) that the use of phages could improve the success of skin grafts by reducing the underlying Pseudomonas aeruginosa infection.[13] Recent studies have provided additional support for these findings in the model system.[14]

Although not phage therapy in the original sense, the use of phages as delivery mechanisms for traditional antibiotics has been proposed.[15][16] The use of phages to deliver antitumor agents has also been described, in preliminary in vitro experiments for cells in tissue culture.[17]

Potential benefits


A hypothetical benefit of phage therapy is freedom from the adverse effects of antibiotics. Additionally, it is conceivable that, although bacteria rapidly develop resistance to phage, the resistance might be easier to overcome than resistance to antibiotics.

Bacteriophages are very specific, targeting only one or a few strains of bacteria.[18] Traditional antibiotics have more wide-ranging effect, killing both harmful bacteria and useful bacteria such as those facilitating food digestion. The specificity of bacteriophages might reduce the chance that useful bacteria are killed when fighting an infection.

Increasing evidence shows the ability of phages to travel to a required site - including the brain, where the blood-brain barrier can be crossed - and multiply in the presence of an appropriate bacterial host, to combat infections such as meningitis. However, the patient's immune system can, in some cases, mount an immune response to the phage (2 out of 44 patients in a Polish trial[19]).

Development and production is faster than antibiotics, on condition that the required recognition molecules are known.[citation needed]

Research groups in the West are engineering broader-spectrum phages, as well as targeted MRSA treatments in a variety of forms - including impregnated wound dressings, preventative treatment for burn victims, and phage-impregnated sutures.[20] Enzobiotics, a new development at Rockefeller University, are enzymes derived from phage. These show potential for preventing secondary bacterial infections, e.g. pneumonia developing in patients suffering from flu, otitis, etc.[citation needed]

Some bacteria, such as multiply resistant Klebsiella pneumoniae, have no non-toxic antibiotics available, and yet killing of the bacteria via intraperitoneal, intravenous or intranasal administration of phages in vivo has been shown to work in laboratory tests.[21]

Risks of Phage Therapy

Phage therapy also has disadvantages:

Unlike antibiotics, phages must be refrigerated until used,[22] and a physician wishing to prescribe them needs special training in order to correctly prescribe and use phages.

Phages come in a great variety. That diversity becomes a disadvantage when the exact species of an infecting bacterium is unknown or if there is a multiple infection. For best results, the phages should be tested in the lab prior to application. For this reason, phages are less suitable for acute cases. Mixtures consisting of several phages can fight mixed infections.

Another disadvantage is that, just as with antibiotics, bacteria can become resistant to treatment - in this case by mutating to survive the phage onslaught. Mutant bacteria can be destroyed by other types of phages, however, and phages are found throughout nature, which means that it is easy to find new phages when bacteria become resistant to them. Evolution drives the rapid emergence of new phages that can destroy bacteria that have become resistant, so there should be an ‘inexhaustible’ supply.

Phages that are injected into the bloodstream are recognized by the human immune system. Some of them are quickly excreted and, after a certain period, antibodies against the phages are produced by the body. For this reason, it appears that one type of phage can only be used once for intravenous treatment.[23]

Application


Collection


In its simplest form, phage treatment works by collecting local samples of water likely to contain high quantities of bacteria and bacteriophages, for example effluent outlets, sewage and other sources.[2] They can also be extracted from corpses. The samples are taken and applied to the bacteria that are to be destroyed which have been cultured on growth medium.

The bacteria usually die, and the mixture is centrifuged. The phages collect on the top of the mixture and can be drawn off.

The phage solutions are then tested to see which ones show growth suppression effects (lysogeny) and/or destruction (lysis) of the target bacteria. The phages showing lysis are then amplified on cultures of the target bacteria, passed through a filter to remove all but the phages, then distributed.

Treatment


Phages are "bacterium specific" and it is therefore necessary in many cases to take a swab from the patient and culture it prior to treatment. Occasionally, isolation of therapeutic phages can typically require a few months to complete, but clinics generally keep supplies of phage cocktails for the most common bacterial strains in a geographical area.

Phages in practice are applied orally, topically on infected wounds or spread onto surfaces, or used during surgical procedures. Injection is rarely used, avoiding any risks of trace chemical contaminants that may be present from the bacteria amplification stage, and recognizing that the immune system naturally fights against viruses introduced into the bloodstream or lymphatic system.

The direct human use of phage might possibly be safe; suggestively, in August 2006, the United States Food and Drug Administration approved spraying meat with phages. Although this initially raised concerns, since without mandatory labeling consumers will not be aware that meat and poultry products have been treated with the spray,[24] it confirms to the public that, for example, phages against Listeria are generally recognized as safe (GRAS status) within the worldwide scientific community and opens the way for other phages to also be recognized as having GRAS status.

Phage therapy has been attempted for the treatment of a variety of bacterial infections including: laryngitis, skin infections, dysentery, conjunctivitis, periodontitis, gingivitis, sinusitis, urinary tract infections and intestinal infections, burns, boils, etc.[2] - also poly-microbial biofilms on chronic wounds, ulcers and infected surgical sites.[citation needed]

In 2007, Phase 2a clinical trials were reported at the Royal National Throat, Nose and Ear Hospital, London, for Pseudomonas aeruginosa infections (otitis).[25][26][27] Documentation of the Phase 1 and Phase 2a studies was not available as of 2009.

Phase 1 clinical trials are underway in the South West Regional Wound Care Center, Lubbock, Texas for an approved cocktail of phages against bacteria, including P. aeruginosa, Staphylococcus aureus and Escherichia coli (better known as E. coli).[citation needed]

Reviews of phage therapy indicate that more clinical and microbiological research is needed to meet current standards.[28]

Distribution

Phages can usually be freeze-dried and turned into pills without materially impacting efficiency.[2] In pill form, temperature stability up to 55 °C and shelf lives of 14 months have been shown.[citation needed]

Application in liquid form is possible, stored preferably in refrigerated vials.[citation needed]

Oral administration works better when an antacid is included, as this increases the number of phages surviving passage through the stomach.[citation needed]

Topical administration often involves application to gauzes that are laid on the area to be treated.[citation needed]

Obstacles


General

The high bacterial strain specificity of phage therapy may make it necessary for clinics to make different cocktails for treatment of the same infection or disease because the bacterial components of such diseases may differ from region to region or even person to person.

In addition, due to the specificity of individual phages, for a high chance of success a mixture of phages is often applied. This means that 'banks' containing many different phages must be kept and regularly updated with new phages.

Further, bacteria can evolve different receptors either before or during treatment; this can prevent the phages from completely eradicating the bacteria.

The need for banks of phages makes regulatory testing for safety harder and more expensive. Such a process would make large-scale production of phage therapy difficult. Additionally, patent issues (specifically on living organisms) may complicate distribution for pharmaceutical companies wishing to have exclusive rights over their "invention", making it unlikely that a for-profit corporation will invest capital in the widespread application of this technology.

As has been known for at least thirty years, mycobacteria such as Mycobacterium tuberculosis have specific bacteriophages.[29] As for Clostridium difficile, which is responsible for many nosocomial diseases, no lytic phage has yet been discovered, but some temperate phages (integrated in the genome) are known for this species, which opens encouraging avenues.

To work, the virus has to reach the site of the bacteria, and viruses do not necessarily reach the same places that antibiotics can reach.

Funding for phage therapy research and clinical trials is generally insufficient and difficult to obtain, since it is a lengthy and complex process to patent bacteriophage products. Scientists comment that 'the biggest hurdle is regulatory', whereas an official view is that individual phages would need to be proven individually, because testing them as a combination would be too complicated, with too many variables. Due to the specificity of phages, phage therapy would be most effective with a cocktail of phages, an approach the FDA has generally rejected. Researchers and observers predict that for phage therapy to be successful the FDA must change its regulatory stance on combination drug cocktails.[30] Public awareness and education about phage therapy are generally limited to scientific or independent research rather than mainstream media.[31]

The negative public perception of viruses may also play a role in the reluctance to embrace phage therapy.[32]

Safety

Phage therapy is generally considered safe. As with antibiotic therapy and other methods of countering bacterial infections, endotoxins are released by the bacteria as they are destroyed within the patient (Herxheimer reaction). This can cause symptoms of fever or, in extreme cases, toxic shock (a problem also seen with antibiotics).[33] Janakiraman Ramachandran, a former president of AstraZeneca India who 2 years ago launched GangaGen Inc., a phage-therapy start-up in Bangalore,[34] argues that this complication can be avoided in those types of infection where this reaction is likely to occur by using genetically engineered bacteriophages that have had the gene responsible for producing endolysin removed. Without this gene the host bacterium still dies but remains intact, because lysis is disabled. On the other hand, this modification stops the exponential growth of phages, so one administered phage means one dead bacterial cell.[4] Eventually these dead cells are consumed by the normal housekeeping duties of the phagocytes, which utilise enzymes to break the whole bacterium and its contents down into its harmless sub-units of proteins, polysaccharides and lipids.[35]

Care has to be taken in manufacture that the phage medium is free of bacterial fragments and endotoxins from the production process.

Lysogenic bacteriophages are not generally used therapeutically. This group can act as a way for bacteria to exchange DNA, and this can help spread antibiotic resistance or even, theoretically, can make the bacteria pathogenic (see Cholera).

The lytic bacteriophages available for phage therapy are best kept refrigerated but discarded if the pale yellow clear liquid goes cloudy.

Effectiveness


In Russia, phage therapies produced by manufacturers have been shown to have approximately a 50% success rate at eradicating target bacteria.

VIDEO



NEXT UP

Diffusion MRI

Saturday, June 20, 2009

Electrotherapy

PROCEDURE OF THE DAY

Electrotherapy

Electrotherapy is the use of electrical energy as a medical treatment.[1] In medicine, the term electrotherapy can apply to a variety of treatments, including the use of electrical devices such as deep brain stimulators for neurological disease. The term has also been applied specifically to the use of electrical current to speed wound healing. Additionally, the term "electrotherapy" or "electromagnetic therapy" has also been applied to a range of alternative medical devices and treatments.


History


In 1855 Guillaume Duchenne, the father of electrotherapy, announced that alternating current was superior to direct current for electrotherapeutic triggering of muscle contractions.[2] What he called the 'warming effect' of direct currents irritated the skin, since, at voltage strengths needed for muscle contractions, they cause the skin to blister (at the anode) and pit (at the cathode). Furthermore, with DC each contraction required the current to be stopped and restarted. Moreover, alternating current could produce strong muscle contractions regardless of the condition of the muscle, whereas DC-induced contractions were strong if the muscle was strong, and weak if the muscle was weak.

Since that time almost all rehabilitation involving muscle contraction has been done with a symmetrical rectangular biphasic waveform. In the 1940s, however, the US War Department, investigating the application of electrical stimulation not just to retard and prevent atrophy but to restore muscle mass and strength, employed what was termed galvanic exercise on the atrophied hands of patients who had an ulnar nerve lesion from surgery upon a wound.[3] These galvanic exercises employed a monophasic waveform - direct current, whose action at the electrodes is electrochemical.

Current use

Although a 1999 meta-analysis found that electrotherapy could speed the healing of wounds,[4] in 2000 the Dutch Medical Council found that although it was widely used, there was insufficient evidence for its benefits.[5] Since that time, a few publications have emerged that seem to support its efficacy, but data is still scarce. [6]

The use of electrotherapy has been widely researched and the advantages have been well accepted in the field of rehabilitation[7] (electrical muscle stimulation). The American Physical Therapy Association acknowledges the use of electrotherapy for:[8]

1. Pain management
* Improve range of joint movement
2. Treatment of neuromuscular dysfunction
* Improvement of strength
* Improvement of motor control
* Retard muscle atrophy
* Improve local blood flow
3. Improve range of joint mobility
* Induce repeated stretching of contracted, shortened soft tissues
4. Tissue repair
* Enhance microcirculation and protein synthesis to heal wounds
* Restore integrity of connective and dermal tissues
5. Acute and chronic edema
* Accelerate absorption rate
* Affect blood vessel permeability
* Increase mobility of proteins, blood cells and lymphatic flow
6. Peripheral blood flow
* Induce arterial, venous and lymphatic flow
7. Iontophoresis
* Delivery of pharmacological agents
8. Urine and fecal incontinence
* Affect pelvic floor musculature to reduce pelvic pain and strengthen musculature
* Treatment may lead to complete continence

Electrotherapy is used for relaxation of muscle spasms, prevention and retardation of disuse atrophy, increase of local blood circulation, muscle rehabilitation and re-education electrical muscle stimulation, maintaining and increasing range of motion, management of chronic and intractable pain, post-traumatic acute pain, post surgical acute pain, immediate post-surgical stimulation of muscles to prevent venous thrombosis, wound healing and drug delivery.

Reputable medical and therapy journals have published peer-reviewed research articles that attest to the medical properties of the various electrotherapies. Yet some of the mechanisms behind their effectiveness are little understood, so effectiveness and best practices for their use are in some instances still anecdotal.

Electrotherapy devices have been studied in the treatment of chronic wounds and pressure ulcers. A 1999 meta-analysis of published trials found some evidence that electrotherapy could speed the healing of such wounds, though it was unclear which devices were most effective and which types of wounds were most likely to benefit.[4] However, a more detailed review by the Cochrane Library found no evidence that electromagnetic therapy, a subset of electrotherapy, was effective in healing pressure ulcers[9] or venous stasis ulcers.

VIDEO


*None*

NEXT UP

Phage therapy

Friday, June 19, 2009

Immunization

PROCEDURE OF THE DAY

Immunization

Immunization, or immunisation, is the process by which an individual's immune system becomes fortified against an agent (known as the immunogen).

When an immune system is exposed to molecules that are foreign to the body (non-self), it will orchestrate an immune response, but it can also develop the ability to quickly respond to a subsequent encounter (through immunological memory). This is a function of the adaptive immune system. Therefore, by exposing an animal to an immunogen in a controlled way, its body can learn to protect itself: this is called active immunization.

The most important elements of the immune system that are improved by immunization are the B cells (and the antibodies they produce) and T cells. Memory B cells and memory T cells are responsible for a swift response to a second encounter with a foreign molecule. In passive immunization, these elements are introduced directly into the body rather than being produced by the body itself.

Immunization can be done through various techniques, most commonly vaccination. Vaccines against microorganisms that cause diseases can prepare the body's immune system, thus helping to fight or prevent an infection. The fact that mutations can cause cancer cells to produce proteins or other molecules that are unknown to the body forms the theoretical basis for therapeutic cancer vaccines. Other molecules can be used for immunization as well, for example in experimental vaccines against nicotine (NicVAX) or the hormone ghrelin (in experiments to create an obesity vaccine).

Passive and active immunization

Immunization can be achieved in an active or passive fashion: vaccination is an active form of immunization.

Active immunization

Active immunization entails the introduction of a foreign molecule into the body, which causes the body itself to generate immunity against the target. This immunity comes from the T cells and the B cells with their antibodies.

Active immunization can occur naturally when a person comes in contact with, for example, a microbe. If the person has not yet come into contact with the microbe and has no pre-made antibodies for defense (like in passive immunization), the person becomes immunized. The immune system will eventually create antibodies and other defenses against the microbe. The next time, the immune response against this microbe can be very efficient; this is the case in many of the childhood infections that a person only contracts once, but then is immune.

Artificial active immunization is where the microbe, or parts of it, are injected into the person before they are able to take it in naturally. If whole microbes are used, they are pre-treated. Depending on the type of disease, this technique also works with dead microbes, parts of the microbe, or treated toxins from the microbe.

Passive immunization

Passive immunization is where pre-synthesized elements of the immune system are transferred to a person so that the body does not need to produce these elements itself. Currently, antibodies can be used for passive immunization. This method of immunization begins to work very quickly, but it is short lasting, because the antibodies are naturally broken down, and if there are no B cells to produce more antibodies, they will disappear.

Passive immunization occurs physiologically, when antibodies are transferred from mother to fetus during pregnancy, to protect the fetus before and shortly after birth.

Artificial passive immunization is normally administered by injection and is used if there has been a recent outbreak of a particular disease or as an emergency treatment for toxicity (for example, for tetanus). The antibodies can be produced in animals ("serum therapy") although there is a high chance of anaphylactic shock because of immunity against animal serum itself. Thus, humanized antibodies produced in vitro by cell culture are used instead if available.

VIDEO



NEXT UP


Electrotherapy

Thursday, June 18, 2009

Chemotherapy

PROCEDURE OF THE DAY

Chemotherapy

A woman being treated with docetaxel chemotherapy for breast cancer. Cold mittens and wine coolers are placed on her hands and feet to prevent deleterious effects on the nails. Similar strategies can be used to prevent hair loss.

Chemotherapy, in its most general sense, refers to treatment of disease by chemicals[1] that kill cells, both good and bad, but specifically those of micro-organisms or cancer. In popular usage, it refers to antineoplastic drugs used to treat cancer or the combination of these drugs into a cytotoxic standardized treatment regimen. In its non-oncological use, the term may also refer to antibiotics (antibacterial chemotherapy). In that sense, the first modern chemotherapeutic agent was Paul Ehrlich's arsphenamine, an arsenic compound discovered in 1909 and used to treat syphilis. This was later followed by sulfonamides discovered by Domagk and penicillin discovered by Alexander Fleming.

Most commonly, chemotherapy acts by killing cells that divide rapidly, one of the main properties of cancer cells. This means that it also harms cells that divide rapidly under normal circumstances: cells in the bone marrow, digestive tract and hair follicles; this results in the most common side-effects of chemotherapy–myelosuppression (decreased production of blood cells), mucositis (inflammation of the lining of the digestive tract) and alopecia (hair loss).

Other uses of cytostatic chemotherapy agents (including the ones mentioned below) are the treatment of autoimmune diseases such as multiple sclerosis and rheumatoid arthritis and the suppression of transplant rejections (see immunosuppression and DMARDs). Newer anticancer drugs act directly against abnormal proteins in cancer cells; this is termed targeted therapy.


History


The usage of chemical substances and drugs as medication can be traced back to the ancient Indian system of medicine called Ayurveda, which uses many metals besides herbs for treatment of a large number of ailments. More recently, Persian physician, Muhammad ibn Zakarīya Rāzi (Rhazes), in the 10th century, introduced the use of chemicals such as vitriol, copper, mercuric and arsenic salts, sal ammoniac, gold scoria, chalk, clay, coral, pearl, tar, bitumen and alcohol for medical purposes.[2]

The first drug used for cancer chemotherapy, however, dates back to the early 20th century, though it was not originally intended for that purpose. Mustard gas was used as a chemical warfare agent during World War I and was studied further during World War II. During a military operation in World War II, a group of people were accidentally exposed to mustard gas and were later found to have very low white blood cell counts[3]. It was reasoned that an agent that damaged the rapidly-growing white blood cells might have a similar effect on cancer. Therefore, in the 1940s, several patients with advanced lymphomas (cancers of certain white blood cells) were given the drug by vein, rather than by breathing the irritating gas. Their improvement, although temporary, was remarkable.[4] [5] That experience led researchers to look for other substances that might have similar effects against cancer. As a result, many other drugs have been developed to treat cancer, and drug development since then has exploded into a multibillion-dollar industry. The targeted-therapy revolution has arrived, but the principles and limitations of chemotherapy discovered by the early researchers still apply. [6]

Principles

Cancer is the uncontrolled growth of cells coupled with malignant behavior: invasion and metastasis. Cancer is thought to be caused by the interaction between genetic susceptibility and environmental toxins.

In the broad sense, most chemotherapeutic drugs work by impairing mitosis (cell division), effectively targeting fast-dividing cells. As these drugs cause damage to cells they are termed cytotoxic. Some drugs cause cells to undergo apoptosis (so-called "programmed cell death").

Scientists have yet to identify specific features of malignant and immune cells that would make them uniquely targetable (barring some recent examples, such as the Philadelphia chromosome as targeted by imatinib). This means that other fast-dividing cells, such as those responsible for hair growth and for replacement of the intestinal epithelium (lining), are also often affected. However, some drugs have a better side-effect profile than others, enabling doctors to adjust treatment regimens to the advantage of patients in certain situations.

As chemotherapy affects cell division, tumors with high growth fractions (such as acute myelogenous leukemia and the aggressive lymphomas, including Hodgkin's disease) are more sensitive to chemotherapy, as a larger proportion of the targeted cells are undergoing cell division at any time. Malignancies with slower growth rates, such as indolent lymphomas, tend to respond to chemotherapy much more modestly.

Drugs affect "younger" tumors (i.e., more differentiated) more effectively, because mechanisms regulating cell growth are usually still preserved. With succeeding generations of tumor cells, differentiation is typically lost, growth becomes less regulated, and tumors become less responsive to most chemotherapeutic agents. Near the center of some solid tumors, cell division has effectively ceased, making them insensitive to chemotherapy. Another problem with solid tumors is the fact that the chemotherapeutic agent often does not reach the core of the tumor. Solutions to this problem include radiation therapy (both brachytherapy and teletherapy) and surgery.

Over time, cancer cells become more resistant to chemotherapy treatments. Recently, scientists have identified small pumps on the surface of cancer cells that actively move chemotherapy from inside the cell to the outside. Research on p-glycoprotein and other such chemotherapy efflux pumps is currently ongoing. Medications to inhibit the function of p-glycoprotein were undergoing testing as of June 2007 to enhance the efficacy of chemotherapy.

Treatment schemes


There are a number of strategies in the administration of chemotherapeutic drugs used today. Chemotherapy may be given with a curative intent or it may aim to prolong life or to palliate symptoms.

Combined modality chemotherapy is the use of drugs with other cancer treatments, such as radiation therapy or surgery. Most cancers are now treated in this way. Combination chemotherapy is a similar practice that involves treating a patient with a number of different drugs simultaneously. The drugs differ in their mechanism and side-effects. The biggest advantage is minimising the chances of resistance developing to any one agent.

In neoadjuvant chemotherapy (preoperative treatment) initial chemotherapy is designed to shrink the primary tumour, thereby rendering local therapy (surgery or radiotherapy) less destructive or more effective.

Adjuvant chemotherapy (postoperative treatment) can be used when there is little evidence of cancer present, but there is risk of recurrence. This can help reduce chances of developing resistance if the tumour does develop. It is also useful in killing any cancerous cells which have spread to other parts of the body. This is often effective as the newly growing tumours are fast-dividing, and therefore very susceptible.

Palliative chemotherapy is given without curative intent, but simply to decrease tumor load and increase life expectancy. For these regimens, a better toxicity profile is generally expected.

All chemotherapy regimens require that the patient be capable of undergoing the treatment. Performance status is often used as a measure to determine whether a patient can receive chemotherapy, or whether dose reduction is required. Because only a fraction of the cells in a tumor die with each treatment (fractional kill), repeated doses must be administered to continue to reduce the size of the tumor [7]. Current chemotherapy regimens apply drug treatment in cycles, with the frequency and duration of treatments limited by toxicity to the patient [8].
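
A toy illustration of the fractional-kill idea: if each cycle kills a fixed fraction of the cells present, the remaining count falls geometrically with each dose rather than by a fixed amount, which is why repeated cycles are needed. The starting burden and kill fraction below are assumed, illustrative numbers, not clinical parameters.

initial_cells = 1e9        # assumed tumour burden
kill_fraction = 0.99       # assumed fraction of cells killed per cycle (2-log kill)

cells = initial_cells
for cycle in range(1, 7):
    cells *= (1 - kill_fraction)
    print(f"after cycle {cycle}: {cells:.2e} cells remaining")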

Types

The majority of chemotherapeutic drugs can be divided into alkylating agents, antimetabolites, anthracyclines, plant alkaloids, topoisomerase inhibitors, and other antitumour agents. All of these drugs affect cell division or DNA synthesis and function in some way.

Some newer agents do not directly interfere with DNA. These include monoclonal antibodies and the new tyrosine kinase inhibitors e.g. imatinib mesylate (Gleevec or Glivec), which directly targets a molecular abnormality in certain types of cancer (chronic myelogenous leukemia, gastrointestinal stromal tumors). These are examples of targeted therapies.

In addition, some drugs that modulate tumor cell behaviour without directly attacking those cells may be used. Hormone treatments fall into this category of adjuvant therapies.

Where available, Anatomical Therapeutic Chemical Classification System codes are provided for the major categories.

Alkylating agents (L01A)


Alkylating agents are so named because of their ability to add alkyl groups to many electronegative groups under conditions present in cells. Cisplatin and carboplatin, as well as oxaliplatin, are alkylating agents.

Other agents are mechlorethamine, cyclophosphamide, chlorambucil. They work by chemically modifying a cell's DNA.

Anti-metabolites (L01B)


Anti-metabolites masquerade as purines (azathioprine, mercaptopurine) or pyrimidines - the building blocks of DNA. They prevent these substances from becoming incorporated into DNA during the "S" phase (of the cell cycle), stopping normal development and division. They also affect RNA synthesis. Due to their efficiency, these drugs are the most widely used cytostatics.

Plant alkaloids and terpenoids (L01C)


These alkaloids are derived from plants and block cell division by preventing microtubule function. Microtubules are vital for cell division, and, without them, cell division cannot occur. The main examples are vinca alkaloids and taxanes.

Vinca alkaloids (L01CA)


Vinca alkaloids bind to specific sites on tubulin, inhibiting the assembly of tubulin into microtubules (M phase of the cell cycle). They are derived from the Madagascar periwinkle, Catharanthus roseus (formerly known as Vinca rosea). The vinca alkaloids include:

* Vincristine
* Vinblastine
* Vinorelbine
* Vindesine

Podophyllotoxin (L01CB)

Podophyllotoxin is a plant-derived compound which is said to help with digestion and is also used to produce two other cytostatic drugs, etoposide and teniposide. They prevent the cell from entering the G1 phase (the start of DNA replication) and the replication of DNA (the S phase). The exact mechanism of its action is not yet known.

The substance has been primarily obtained from the American Mayapple (Podophyllum peltatum). Recently it has been discovered that a rare Himalayan Mayapple (Podophyllum hexandrum) contains it in a much greater quantity, but, as the plant is endangered, its supply is limited. Studies have been conducted to isolate the genes involved in the substance's production, so that it could be obtained recombinantly.

Taxanes (L01CD)

The prototype taxane is the natural product paclitaxel, originally known as Taxol and first derived from the bark of the Pacific Yew tree. Docetaxel is a semi-synthetic analogue of paclitaxel. Taxanes enhance stability of microtubules, preventing the separation of chromosomes during anaphase.

Topoisomerase inhibitors (L01CB and L01XX)

Topoisomerases are essential enzymes that maintain the topology of DNA. Inhibition of type I or type II topoisomerases interferes with both transcription and replication of DNA by upsetting proper DNA supercoiling.

* Some type I topoisomerase inhibitors include camptothecins: irinotecan and topotecan.

* Examples of type II inhibitors include amsacrine, etoposide, etoposide phosphate, and teniposide. These are semisynthetic derivatives of epipodophyllotoxins, alkaloids naturally occurring in the root of American Mayapple (Podophyllum peltatum).

Antitumour antibiotics (L01D)


These include the immunosuppressant dactinomycin (which is used in kidney transplantations), doxorubicin, epirubicin, bleomycin and others.

Newer and experimental approaches

Hematopoietic stem cell transplant approaches

Stem cell harvesting and autologous hematopoietic stem cell transplantation have been used to allow for higher doses of chemotherapeutic agents where dosages are primarily limited by hematopoietic damage. Years of research in treating solid tumors, particularly breast cancer, with hematopoietic stem cell transplants have yielded little proof of efficacy. Hematological malignancies such as myeloma, lymphoma, and leukemia remain the main indications for stem cell transplants.

Isolated infusion approaches


Isolated limb perfusion (often used in melanoma), or isolated infusion of chemotherapy into the liver or the lung have been used to treat some tumours. The main purpose of these approaches is to deliver a very high dose of chemotherapy to tumor sites without causing overwhelming systemic damage. These approaches can help control solitary or limited metastases, but they are by definition not systemic, and, therefore, do not treat distributed metastases or micrometastases.

Targeted delivery mechanisms


Specially-targeted delivery vehicles aim to increase effective levels of chemotherapy for tumor cells while reducing effective levels for other cells. This should result in an increased tumor kill and/or reduced toxicity.

Specially-targeted delivery vehicles have a differentially higher affinity for tumor cells by interacting with tumor-specific or tumour-associated antigens.

In addition to their targeting component, they also carry a payload - whether this is a traditional chemotherapeutic agent, or a radioisotope or an immune stimulating factor. Specially-targeted delivery vehicles vary in their stability, selectivity, and choice of target, but, in essence, they all aim to increase the maximum effective dose that can be delivered to the tumor cells. Reduced systemic toxicity means that they can also be used in sicker patients, and that they can carry new chemotherapeutic agents that would have been far too toxic to deliver via traditional systemic approaches.

Nanoparticles


Nanoparticles have emerged as a useful vehicle for poorly-soluble agents such as paclitaxel. Protein-bound paclitaxel (e.g., Abraxane) or nab-paclitaxel was approved by the U.S. Food and Drug Administration (FDA) in January 2005 for the treatment of refractory breast cancer, and allows reduced use of the Cremophor vehicle usually found in paclitaxel. Nanoparticles made of magnetic material can also be used to concentrate agents at tumour sites using an externally applied magnetic field.

Dosage


Dosage of chemotherapy can be difficult: if the dose is too low, it will be ineffective against the tumor, whereas, at excessive doses, the toxicity (side-effects, neutropenia) will be intolerable to the patient. This has led to the formation of detailed "dosing schemes" in most hospitals, which give guidance on the correct dose and on adjustments in case of toxicity. When the same agents are used for immunosuppression rather than for malignant disease, the doses are in principle smaller.

In most cases, the dose is adjusted for the patient's body surface area (BSA), a measure that correlates with blood volume. The BSA is usually calculated with a mathematical formula or a nomogram, using a patient's weight and height, rather than by direct measurement.
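As an illustration, one widely used BSA formula is the Mosteller formula, the square root of (height in cm x weight in kg / 3600); the text above does not specify which formula is used, and the drug dose and patient measurements in this short Python sketch are hypothetical, chosen only to show how a per-square-metre dose scales with BSA.

    import math

    def bsa_mosteller(height_cm, weight_kg):
        """Body surface area (m^2) by the Mosteller formula: sqrt(height * weight / 3600)."""
        return math.sqrt(height_cm * weight_kg / 3600.0)

    def bsa_based_dose(dose_per_m2, height_cm, weight_kg):
        """Scale a per-square-metre dose (mg/m^2) to the patient's BSA."""
        return dose_per_m2 * bsa_mosteller(height_cm, weight_kg)

    # Hypothetical example: an agent dosed at 100 mg/m^2 for a 170 cm, 70 kg patient.
    bsa = bsa_mosteller(170, 70)            # about 1.82 m^2
    dose = bsa_based_dose(100, 170, 70)     # about 182 mg
    print(f"BSA = {bsa:.2f} m^2, dose = {dose:.0f} mg")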

Delivery

Most chemotherapy is delivered intravenously, although a number of agents can be administered orally (e.g., melphalan, busulfan, capecitabine). In some cases, isolated limb perfusion (often used in melanoma), or isolated infusion of chemotherapy into the liver or the lung have been used. The main purpose of these approaches is to deliver a very high dose of chemotherapy to tumour sites without causing overwhelming systemic damage.

Depending on the patient, the cancer, the stage of cancer, the type of chemotherapy, and the dosage, intravenous chemotherapy may be given on either an inpatient or an outpatient basis. For continuous, frequent or prolonged intravenous chemotherapy administration, various systems may be surgically inserted into the vasculature to maintain access. Commonly-used systems are the Hickman line, the Port-a-Cath or the PICC line. These have a lower infection risk, are much less prone to phlebitis or extravasation, and abolish the need for repeated insertion of peripheral cannulae.

Harmful and lethal toxicity from chemotherapy limits the dosage that can be given. Some tumours can be destroyed by sufficiently high doses of chemotherapeutic agents; however, these high doses cannot be given because they would be fatal to the patient.

Side-effects

The treatment can be physically exhausting for the patient. Current chemotherapeutic techniques have a range of side effects mainly affecting the fast-dividing cells of the body. Important common side-effects include (dependent on the agent):

* Pain
* Nausea and vomiting
* Diarrhea or constipation
* Anemia
* Malnutrition
* Hair loss
* Memory loss
* Depression of the immune system, hence (potentially lethal) infections and sepsis
* Psychosocial distress
* Weight loss or gain
* Hemorrhage
* Secondary neoplasms
* Cardiotoxicity
* Hepatotoxicity
* Nephrotoxicity
* Ototoxicity


Secondary neoplasms


Secondary neoplasia can develop after successful chemotherapy and/or radiotherapy treatment. The most common secondary neoplasm is secondary acute myeloid leukemia, which develops primarily after treatment with alkylating agents or topoisomerase inhibitors.[9] Other studies have shown a 13.5-fold increase over the general population in the incidence of secondary neoplasms 30 years after treatment.[10]

Immunosuppression and myelosuppression

Virtually all chemotherapeutic regimens can cause depression of the immune system, often by paralysing the bone marrow and leading to a decrease of white blood cells, red blood cells, and platelets. The latter two, when they occur, are improved with blood transfusion. Neutropenia (a decrease of the neutrophil granulocyte count below 0.5 x 10^9/litre) can be improved with synthetic G-CSF (granulocyte-colony stimulating factor, e.g., filgrastim, lenograstim, Neupogen, Neulasta).

In very severe myelosuppression, which occurs in some regimens, almost all the bone marrow stem cells (cells that produce white and red blood cells) are destroyed, meaning allogeneic or autologous bone marrow cell transplants are necessary. (In autologous BMTs, cells are removed from the patient before the treatment, multiplied and then re-injected afterwards; in allogeneic BMTs the source is a donor.) However, some patients still develop diseases because of this interference with bone marrow.

Nausea and vomiting


Nausea and vomiting are common side-effects of chemotherapy; stomach upset may trigger a strong urge to vomit, or to forcefully eliminate what is in the stomach.

Stimulation of the vomiting center results in the coordination of responses from the diaphragm, salivary glands, cranial nerves, and gastrointestinal muscles to produce the interruption of respiration and forced expulsion of stomach contents known as retching and vomiting. The vomiting center is stimulated directly by afferent input from the vagal and splanchnic nerves, the pharynx, the cerebral cortex, cholinergic and histamine stimulation from the vestibular system, and efferent input from the chemoreceptor trigger zone (CTZ). The CTZ is in the area postrema, outside the blood-brain barrier, and is thus susceptible to stimulation by substances present in the blood or cerebrospinal fluid. The neurotransmitters dopamine and serotonin stimulate the vomiting center indirectly via stimulation of the CTZ.

The 5-HT3 inhibitors are the most effective antiemetics and constitute the single greatest advance in the management of nausea and vomiting in patients with cancer. These drugs are designed to block one or more of the signals that cause nausea and vomiting. The most sensitive signal during the first 24 hours after chemotherapy appears to be 5-HT3. Blocking the 5-HT3 signal is one approach to preventing acute emesis (vomiting), or emesis that is severe, but relatively short-lived. Approved 5-HT3 inhibitors include Dolasetron (Anzemet), Granisetron (Kytril, Sancuso), and Ondansetron (Zofran). The newest 5-HT3 inhibitor, palonosetron (Aloxi), also prevents delayed nausea and vomiting, which occurs during the 2-5 days after treatment. A granisetron transdermal patch (Sancuso) was approved by the FDA in September 2008. The patch is applied 24-48 hours before chemotherapy and can be worn for up to 7 days depending on the duration of the chemotherapy regimen.

Another drug to control nausea in cancer patients became available in 2005. The substance P inhibitor aprepitant (marketed as Emend) has been shown to be effective in controlling the nausea of cancer chemotherapy. The results of two large controlled trials were published in 2005, describing the efficacy of this medication in over 1,000 patients.[11]

Some studies[12] and patient groups claim that the use of cannabinoids derived from marijuana during chemotherapy greatly reduces the associated nausea and vomiting, and enables the patient to eat. Some synthetic derivatives of the active substance in marijuana (tetrahydrocannabinol or THC), such as Marinol, may be practical for this application. Natural marijuana, known as medical cannabis, is also used and recommended by some oncologists, though its use is regulated and not legal everywhere.[13]

Other side-effects

In particularly large tumors, such as large lymphomas, some patients develop tumor lysis syndrome from the rapid breakdown of malignant cells. Although prophylaxis is available and is often initiated in patients with large tumors, this is a dangerous side-effect that can lead to death if left untreated.

Some patients report fatigue or non-specific neurocognitive problems, such as an inability to concentrate; this is sometimes called post-chemotherapy cognitive impairment, referred to as "chemo brain" by patients' groups.[14]

Specific chemotherapeutic agents are associated with organ-specific toxicities, including cardiovascular disease (e.g., doxorubicin), interstitial lung disease (e.g., bleomycin) and occasionally secondary neoplasm (e.g., MOPP therapy for Hodgkin's disease).

VIDEO



NEXT UP

Immunization

Wednesday, June 17, 2009

Chest Photofluorography

PROCEDURE OF THE DAY

Chest Photofluorography

Chest photofluorography, or abreugraphy (also called mass miniature radiography), is a photofluorography technique for mass screening for tuberculosis that uses a miniature (50 to 100 mm) photograph of the screen of an x-ray fluoroscopy of the thorax; it was first developed in 1935.


History


Abreugraphy receives its name from its inventor, Dr. Manuel Dias de Abreu, a Brazilian physician and pulmonologist. It has received several different names, according to the country where it was adopted: mass radiography, miniature chest radiograph (United Kingdom and USA), roentgenfluorography (Germany), radiophotography (France), schermografia (Italy), photoradioscopy (Spain) and photofluorography (Sweden).

In many countries, miniature mass radiography (MMR) was quickly adopted and extensively used in the 1950s. For example, in Brazil and in Japan, tuberculosis prevention laws went into effect, obligating ca. 60% of the population to undergo MMR screening. However, as a mass screening program for low-risk populations, the procedure was largely discontinued in the 1970s, following a recommendation of the World Health Organization, for three main reasons:

1. The dramatic decrease in the general incidence of tuberculosis in developed countries (from 150 cases per 100,000 inhabitants in 1900, to 70/100,000 in 1940 and 5/100,000 in 1950);
2. A decreased benefit/cost ratio (a recent Canadian study [1] has shown a cost of CA$ 236,496 per case detected in groups of immigrants with a low risk for tuberculosis, versus CA$ 3,943 per case in high-risk groups; see the short calculation after this list);
3. The risk of exposure to ionizing radiation, particularly among children, set against extremely low detection yields.
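To put the second point in perspective, here is a quick back-of-the-envelope calculation in Python using only the figures quoted above; it is illustrative arithmetic, not additional study data.

    # Cost per tuberculosis case detected, from the Canadian study cited above (CA$).
    cost_per_case_low_risk = 236_496
    cost_per_case_high_risk = 3_943

    ratio = cost_per_case_low_risk / cost_per_case_high_risk
    print(f"Screening low-risk immigrants costs about {ratio:.0f} times more "
          f"per detected case than screening high-risk groups.")   # roughly 60x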

Current use

MMR is still an easy and useful way to prevent transmission of the disease in certain situations, such as in prisons and for immigration applicants and foreign workers coming from countries with a higher risk for tuberculosis. Currently, 13 of the 26 European countries use MMR as the primary screening tool for this purpose. Examples of countries with permanent programs are Italy, Switzerland, Norway, Netherlands, Japan and the United Kingdom.

For example, a study in Switzerland [2] between 1988 and 1990, employing abreugraphy to detect tuberculosis in 50,784 immigrants entering the canton of Vaud, discovered 674 foreign people with abnormalities. Of these, 256 had tuberculosis as the primary diagnosis and 34 were smear or culture-positive (5% of all radiological abnormalities).

Elderly populations are also a good target for MMR-based screening, because the radiation risk is less important and because they have a higher risk of tuberculosis (85 per 100,000 in developed countries, on average). In Japan, for example, it is still used routinely, and the Japan Anti-Tuberculosis Association (JATA) reported the detection of 228 cases in 965,440 chest radiographs in 1996 alone [3].

MMR is most useful at detecting tuberculosis infection in the asymptomatic phase, and it should be combined with tuberculin skin tests and clinical questioning in order to be more effective. The sharp increase in tuberculosis in countries with widespread HIV infection will probably prompt a return of MMR as a screening tool focused on high-risk populations, such as homosexuals and intravenous drug users. New advances in digital radiography, coupled with much lower x-ray doses, may herald better MMR technologies.

VIDEO


*None*

NEXT UP


Chemotherapy

Tuesday, June 16, 2009

Fluoroscopy

PROCEDURE OF THE DAY

Fluoroscopy

Fluoroscopy is an imaging technique commonly used by physicians to obtain real-time moving images of the internal structures of a patient through the use of a fluoroscope. In its simplest form, a fluoroscope consists of an x-ray source and fluorescent screen between which a patient is placed. However, modern fluoroscopes couple the screen to an x-ray image intensifier and CCD video camera allowing the images to be recorded and played on a monitor.

The use of x-rays, a form of ionizing radiation, requires that the potential risks from a procedure be carefully balanced with the benefits of the procedure to the patient. While physicians always try to use low dose rates during fluoroscopic procedures, the length of a typical procedure often results in a relatively high absorbed dose to the patient. Recent advances include the digitization of the images captured and flat-panel detector systems which reduce the radiation dose to the patient still further.

History

The beginning of fluoroscopy can be traced back to 8 November 1895, when Wilhelm Röntgen noticed a barium platinocyanide screen fluorescing as a result of being exposed to what he would later call x-rays. Within months of this discovery, the first fluoroscopes were created. Early fluoroscopes were simply cardboard funnels, open at the narrow end for the eyes of the observer, while the wide end was closed with a thin piece of cardboard that had been coated on the inside with a layer of fluorescent metal salt. The fluoroscopic image obtained in this way is rather faint. Thomas Edison quickly discovered that calcium tungstate screens produced brighter images and is credited with designing and producing the first commercially available fluoroscope. In its infancy, many incorrectly predicted that the moving images from fluoroscopy would completely replace still x-ray radiographs, but the superior diagnostic quality of the earlier radiographs prevented this from occurring.

Ignorance of the harmful effects of x-rays resulted in the absence of standard radiation safety procedures which are employed today. Scientists and physicians would often place their hands directly in the x-ray beam resulting in radiation burns. Trivial uses for the technology also resulted, including the shoe-fitting fluoroscope used by shoe stores in the 1930s-1950s.[1]

Due to the limited light produced by the fluorescent screens, early radiologists were required to sit in the darkened room in which the procedure was to be performed, accustoming their eyes to the dark and thereby increasing their sensitivity to the light. The placement of the radiologist behind the screen resulted in significant radiation doses to the radiologist. Red adaptation goggles were developed by Wilhelm Trendelenburg in 1916 to address the problem of dark adaptation of the eyes, previously studied by Antoine Beclere. The resulting red light from the goggles' filtration correctly sensitized the physician's eyes prior to the procedure while still allowing him to receive enough light to function normally.

The development of the X-ray image intensifier and the television camera in the 1950s revolutionized fluoroscopy. The red adaptation goggles became obsolete as image intensifiers allowed the light produced by the fluorescent screen to be amplified, allowing it to be seen even in a lighted room. The addition of the camera enabled viewing of the image on a monitor, allowing a radiologist to view the images in a separate room away from the risk of radiation exposure.

More modern improvements in screen phosphors, image intensifiers and even flat panel detectors have allowed for increased image quality while minimizing the radiation dose to the patient. Modern fluoroscopes use CsI screens and produce noise-limited images, ensuring that the radiation dose is kept to the minimum needed to obtain images of acceptable quality.

Risks


Because fluoroscopy involves the use of x-rays, a form of ionizing radiation, all fluoroscopic procedures pose a potential health risk to the patient. Radiation doses to the patient depend greatly on the size of the patient as well as the length of the procedure, with typical skin dose rates quoted as 20-50 mGy/min. Exposure times vary depending on the procedure being performed, but procedure times of up to 75 minutes have been documented. Because of the long length of some procedures, in addition to the standard cancer-inducing stochastic radiation effects, deterministic radiation effects have also been observed, ranging from mild erythema, equivalent to a sunburn, to more serious burns.
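A rough sense of scale comes from multiplying the quoted skin dose rates by the longest documented procedure time, as in the short Python sketch below. It is illustrative only: actual doses depend on patient size, beam geometry and how long the beam is actually on.

    # Rough cumulative entrance skin dose from the figures quoted above.
    dose_rates_mgy_per_min = (20, 50)   # typical quoted skin dose-rate range
    beam_on_minutes = 75                # longest documented procedure time cited above

    for rate in dose_rates_mgy_per_min:
        total_gy = rate * beam_on_minutes / 1000.0
        print(f"{rate} mGy/min for {beam_on_minutes} min -> {total_gy:.1f} Gy to the skin")
    # 1.5-3.8 Gy: consistent with the deterministic effects (erythema, burns)
    # described above for the longest procedures.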

A study has been performed by the Food and Drug Administration (FDA) entitled Radiation-induced Skin Injuries from Fluoroscopy[2] with an additional publication to minimize further fluoroscopy-induced injuries, Public Health Advisory on Avoidance of Serious X-Ray-Induced skin Injuries to Patients During Fluoroscopically-Guided Procedures[3].

While deterministic radiation effects are a possibility, radiation burns are not typical of standard fluoroscopic procedures. Most procedures sufficiently long in duration to produce radiation burns are part of necessary life-saving operations.

Fluoroscopy Equipment

The first fluoroscopes consisted of an x-ray source and fluorescent screen between which the patient would be placed. As the x-rays pass through the patient, they are attenuated by varying amounts as they interact with the different internal structures of the body, casting a shadow of the structures on the fluorescent screen. Images on the screen are produced as the unattenuated x-rays interact with atoms in the screen through the photoelectric effect, giving their energy to the electrons. While much of the energy given to the electrons is dissipated as heat, a fraction of it is given off as visible light, producing the images. Early radiologists would adapt their eyes to view the dim fluoroscopic images by sitting in darkened rooms, or by wearing red adaptation goggles.

X-ray Image Intensifiers

The invention of X-ray image intensifiers in the 1950s allowed the image on the screen to be visible under normal lighting conditions, as well as providing the option of recording the images with a conventional camera. Subsequent improvements included the coupling of, at first, video cameras and, later, CCD cameras to permit recording of moving images and electronic storage of still images.

Modern image intensifiers no longer use a separate fluorescent screen. Instead, a caesium iodide phosphor is deposited directly on the photocathode of the intensifier tube. On a typical general purpose system, the output image is approximately 10^5 times brighter than the input image. This brightness gain comprises a flux gain (amplification of photon number) and minification gain (concentration of photons from a large input screen onto a small output screen) each of approximately 100. This level of gain is sufficient that quantum noise, due to the limited number of x-ray photons, is a significant factor limiting image quality.
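The brightness gain can be reproduced with a little arithmetic: minification gain is the ratio of input to output screen areas, i.e. (d_in / d_out)^2, multiplied by the flux gain. The screen diameters in this sketch are assumed for illustration and are not taken from any particular system.

    # Brightness gain of an image intensifier = flux gain x minification gain.
    d_input_cm = 30.0    # assumed input phosphor diameter
    d_output_cm = 2.5    # assumed output phosphor diameter
    flux_gain = 100.0    # approximate photon-number amplification, as quoted above

    minification_gain = (d_input_cm / d_output_cm) ** 2   # (30 / 2.5)^2 = 144
    brightness_gain = flux_gain * minification_gain

    print(f"Minification gain: {minification_gain:.0f}")
    print(f"Total brightness gain: {brightness_gain:.0f} (order 10^4-10^5)")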

Image intensifiers are available with input diameters of up to 45 cm, and a resolution of approximately 2-3 line pairs per mm.

Flat-panel detectors

The introduction of flat-panel detectors allows for the replacement of the image intensifier in fluoroscope design. Flat panel detectors offer increased sensitivity to X-rays, and therefore have the potential to reduce patient radiation dose. Temporal resolution is also improved over image intensifiers, reducing motion blurring. Contrast ratio is also improved over image intensifiers: flat-panel detectors are linear over a very wide latitude, whereas image intensifiers have a maximum contrast ratio of about 35:1. Spatial resolution is approximately equal, although an image intensifier operating in 'magnification' mode may be slightly better than a flat panel.

Flat panel detectors are considerably more expensive to purchase and repair than image intensifiers, so their uptake is primarily in specialties that require high-speed imaging, e.g., vascular imaging and cardiac catheterization.

Imaging concerns


In addition to spatial blurring factors that plague all x-ray imaging devices, caused by such things as Lubberts effect, K-fluorescence reabsorption and electron range, fluoroscopic systems also experience temporal blurring due to system lag. This temporal blurring has the effect of averaging frames together. While this helps reduce noise in images with stationary objects, it creates motion blurring for moving objects. Temporal blurring also complicates measurements of system performance for fluoroscopic systems.

Common procedures using fluoroscopy


* Investigations of the gastrointestinal tract, including barium enemas, barium meals and barium swallows, and enteroclysis.
* Orthopaedic surgery to guide fracture reduction and the placement of metalwork.
* Angiography of the leg, heart and cerebral vessels.
* Placement of a PICC (peripherally inserted central catheter)
* Placement of a weighted feeding tube (e.g. Dobhoff) into the duodenum after previous attempts without fluoroscopy have failed.
* Urological surgery – particularly in retrograde pyelography.
* Implantation of cardiac rhythm management devices (pacemakers, implantable cardioverter defibrillators and cardiac resynchronization devices)

Another common procedure is the modified barium swallow study during which barium-impregnated liquids and solids are ingested by the patient. A radiologist records and, with a speech pathologist, interprets the resulting images to diagnose oral and pharyngeal swallowing dysfunction. Modified barium swallow studies are also used in studying normal swallow function.

VIDEO




NEXT UP

Chest Photofluorography

Monday, June 15, 2009

Plasmapheresis

PROCEDURE OF THE DAY

Plasmapheresis

Plasmapheresis (from the Greek plasma, something molded, and apheresis, taking away) is the removal, treatment, and return of (components of) blood plasma from blood circulation. It is thus an extracorporeal therapy. The method can also be used to collect plasma for further manufacturing into a variety of medications.

As therapy

During plasmapheresis, blood is initially taken out of the body through a needle or previously implanted catheter. Plasma is then removed from the blood by a cell separator. Three procedures are commonly used to separate the plasma from the blood cells:

* Discontinuous flow centrifugation: One venous line is required. Typically, a 300 ml batch of blood is removed at a time and centrifuged to separate plasma from blood cells.
* Continuous flow centrifugation: Two venous lines are used. This method requires slightly less blood volume to be out of the body at any one time as it is able to continuously spin out plasma.
* Plasma filtration: Two venous lines are used. The plasma is filtered using standard hemodialysis equipment. This continuous process requires less than 100 ml of blood to be outside the body at one time.

Each method has its advantages and disadvantages. After plasma separation, the blood cells are returned to the person undergoing treatment, while the plasma, which contains the antibodies, is first treated and then returned to the patient in traditional plasmapheresis. (In plasma exchange, the removed plasma is discarded and the patient receives replacement donor plasma, albumin or saline with added proteins.) Medication to keep the blood from clotting (an anticoagulant) is generally given to the patient during the procedure. Plasmapheresis is used as a therapy in particular diseases. It is an uncommon treatment in the United States, but it is more common in Europe and particularly Japan.[citation needed]

An important use of plasmapheresis is in the therapy of autoimmune disorders, where the rapid removal of disease-causing autoantibodies from the circulation is required in addition to slower medical therapy. It is important to note that plasma exchange by itself only tempers the disease process; simultaneous medical and immunosuppressive therapy is required for long-term management. Plasma exchange offers the quickest short-term answer to removing harmful autoantibodies; however, the production of autoantibodies by the immune system must also be suppressed, usually by medications that suppress the immune system, such as prednisone, cyclophosphamide, cyclosporine, mycophenolate mofetil, rituximab, or a mixture of these.

Other uses are the removal of blood proteins where these are overly abundant and cause hyperviscosity syndrome.

Examples of diseases that can be treated with plasmapheresis:

* Guillain-Barré syndrome
* Chronic inflammatory demyelinating polyneuropathy
* Goodpasture's syndrome
* Hyperviscosity syndromes:
o Cryoglobulinemia
o Paraproteinemia
o Waldenström macroglobulinemia
* Myasthenia gravis
* Thrombotic thrombocytopenic purpura (TTP)/Hemolytic Uremic Syndrome
* Wegener's granulomatosis
* Lambert-Eaton Syndrome
* Antiphospholipid-Antibody Syndrome (APL)
* Microscopic polyangiitis
* Recurrent focal and segmental glomerulosclerosis in the transplanted kidney
* HELLP syndrome
* Refsum disease
* Behcet syndrome
* HIV-related neuropathy [1]
* Graves' disease in infants and neonates
* Pemphigus vulgaris
* Multiple Sclerosis

Complications of plasmapheresis therapy


Though plasmapheresis is helpful in certain medical conditions, like any other therapy, there are potential risks and complications. Insertion of a rather large intravenous catheter can lead to bleeding or lung puncture (depending on the site of catheter insertion), and, if the catheter is left in too long, infection.

Aside from placing the catheter, the procedure itself has complications. When blood is outside the body, passing through the plasmapheresis filter, it has a tendency to clot. To reduce this tendency, citrate is infused while the blood is running through the circuit. Citrate binds to calcium in the blood, and calcium is essential for blood to clot. Citrate is very effective in preventing blood from clotting; however, its use can lead to life-threatening low calcium levels. This can be detected using Chvostek's sign or Trousseau's sign. To prevent this complication, calcium is infused intravenously while the patient is undergoing the plasmapheresis; in addition, calcium supplementation by mouth may also be given.

Other complications include:

* Potential exposure to blood products, with risk of transfusion reactions or transfusion transmitted diseases
* Suppression of the patient's immune system
* Bleeding or hematoma from needle placement

As a manufacturing process

[Image: BioLife Plasma Services, one of several chains of plasma donation centers in the United States.]

Plasma donation is in many ways similar to whole blood donation, though the end product is used for different purposes. Most plasmapheresis is performed to collect plasma for fractionation into other products, whereas other blood donations are transfused with relatively minor modifications. Plasma that is collected solely for further manufacturing is called Source Plasma.

Plasma donors undergo a screening process to ensure both the donor's safety and the safety of the collected product. Factors monitored include blood pressure, pulse, temperature, total protein, protein electrophoresis, health history screening similar to that for whole blood, as well as an annual physical exam with a licensed physician or an approved physician substitute under the supervision of the physician. Donors are screened at each donation for viral diseases that can be transmitted by blood, sometimes by multiple methods. For example, donors are tested for HIV by EIA, which will show if they have ever been exposed to the disease, as well as by nucleic acid methods (PCR or similar) to rule out recent infections that might be missed by the EIA test. Industry standards require at least two sets of negative test results before the collected plasma is used for injectable products. The plasma is also treated in processing multiple times to inactivate any virus that was undetected during the screening process.

Plasma donors are typically paid cash for their donations, though this is not universal; for example, donors in New Zealand are not given financial incentives. Since the products are heavily processed and treated to remove infectious agents, the higher risk associated with paid donation is considered acceptable. Standards for plasma donation are set by national regulatory agencies such as the FDA[2] and the European Union, and by a professional organization, the Plasma Protein Therapeutics Association or PPTA[1], which audits and accredits collection facilities. A National Donor Deferral Registry (NDDR) is also maintained by the PPTA to keep donors with prior positive test results from donating at any facility.

Almost all plasmapheresis in the US is performed by automated methods such as the Plasma Collection System (PCS2) made by Haemonetics or the Autopheresis-C (Auto-C) made by Fenwal, a division of Baxter International. In some cases, automated plasmapheresis is used to collect plasma products like Fresh frozen plasma for direct transfusion purposes, often at the same time as plateletpheresis.

Manual method

For the manual method, approximately the same volume as a whole blood donation is collected from the donor. The collected blood is then separated by centrifuge machines in separate rooms; the plasma is pressed out of the collection set into a satellite container, and the red blood cells are returned to the donor. Since returning red cells causes the plasma to be replaced more rapidly by the body, a donor can provide up to a liter of plasma at a time and can donate with only a few days between donations, unlike the 56-day deferral for blood donation. The amount allowed in a donation varies vastly from country to country, but generally does not exceed two donations, each of as much as a liter, per 7-day period.

The danger with this method was that if the wrong red blood cells were returned to the donor, a serious and potentially fatal transfusion reaction could occur. Requiring donors to recite their names and ID numbers on returned bags of red cells minimized this risk. This procedure has largely become obsolete in favor of the automated method.

Automated method

The automated method uses a very similar process. The difference is that the collection, separation, and return are all performed inside a machine which is connected to the donor through a needle placed in the arm, typically the antecubital vein. There is no risk of receiving the wrong red cells.[3] The devices used are very similar to the devices used for therapeutic plasmapheresis, and the potential for citrate toxicity is similar. The potential risks are explained to prospective donors at the first donation, and most donors tolerate the procedure well.

If a significant amount of red blood cells cannot be returned, the donor may not donate for 56 days, just as if they had donated a unit of blood. Depending on the collection system and the operation, the removed plasma may be replaced by saline. The body will typically replace the collected volume within 24 hours, and donors typically donate up to twice a week, though this varies by country.
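The deferral logic described in the last two paragraphs can be summarised in a small sketch. Only the rules stated here are encoded (a 56-day deferral when red cells cannot be returned, otherwise up to two donations per week); the minimum gap between successful donations is an assumption, and actual rules vary by country and collection programme.

    from datetime import date, timedelta

    def next_eligible_date(last_donation, red_cells_returned, donations_in_last_7_days):
        """Toy plasma-donation eligibility check based on the rules described above."""
        if not red_cells_returned:
            # Treated like a whole blood donation: 56-day deferral.
            return last_donation + timedelta(days=56)
        if donations_in_last_7_days >= 2:
            # Already at the two-donations-per-week limit.
            return last_donation + timedelta(days=7)
        return last_donation + timedelta(days=2)   # assumed minimum gap of a few days

    # Example: red cells could not be returned on 1 June 2009 -> eligible 27 July 2009.
    print(next_eligible_date(date(2009, 6, 1), red_cells_returned=False,
                             donations_in_last_7_days=1))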

The collected plasma is promptly frozen at lower than -20 °C (-4 °F) and is typically shipped to a processing facility for fractionation. This process separates the collected plasma into specific components, such as albumin and immunoglobulins, most of which are made into medications for human use. Sometimes the plasma is thawed and transfused as Fresh Frozen Plasma (FFP), much like the plasma from a normal blood donation.

Donors are sometimes immunized against agents such as Tetanus or Hepatitis B so that their plasma contains the antibodies against the toxin or disease. In other donors, an intentionally incompatible unit of blood is transfused to produce antibodies to the antigens on the red cells. The collected plasma then contains these components, which are used in manufacturing of medications. Donors who are already ill may have their plasma collected for use as a positive control for laboratory testing.

VIDEO



NEXT UP

Fluoroscopy

Sunday, June 14, 2009

Blood Transfusion

PROCEDURE OF THE DAY

Blood Transfusion

Blood transfusion is the process of transferring blood or blood-based products from one person into the circulatory system of another. Blood transfusions can be life-saving in some situations, such as massive blood loss due to trauma, or can be used to replace blood lost during surgery. Blood transfusions may also be used to treat a severe anaemia or thrombocytopenia caused by a blood disease. People suffering from hemophilia or sickle-cell disease may require frequent blood transfusions. Early transfusions used Whole Blood, but modern medical practice is to use only components of the blood.


History


Early attempts


The first historical attempt at blood transfusion was described by the 15th-century chronicler Stefano Infessura. Infessura relates that, in 1492, as Pope Innocent VIII sank into a coma, the blood of three boys was infused into the dying pontiff (through the mouth, as the concept of circulation and methods for intravenous access did not exist at that time) at the suggestion of a physician. The boys were ten years old, and had been promised a ducat each. However, not only did the pope die, but so did the three children. Some authors have discredited Infessura's account, accusing him of anti-papalism.[1]
[Image: World War II syringe for direct interhuman blood transfusion.]

With Harvey's re-discovery of the circulation of the blood (which was discovered by Ibn al-Nafis in the 13th century), more sophisticated research into blood transfusion began in the 17th century, with successful experiments in transfusion between animals. However, successive attempts on humans continued to have fatal results.

The first fully-documented human blood transfusion was administered by Dr. Jean-Baptiste Denys, eminent physician to King Louis XIV of France, on June 15, 1667. He transfused the blood of a sheep into a 15-year-old boy, who recovered. Denys performed another transfusion into a labourer, who also survived. Both instances were likely due to the small amount of blood that was actually transfused into these people, which allowed them to withstand the allergic reaction. In the winter of 1667, Denys performed several transfusions of calf's blood on Antoine Mauroy, who died after the third procedure[2]. Much controversy surrounded his death, and his wife was accused of causing it. Though it was later determined that Mauroy actually died from arsenic poisoning, Denys' experiments with animal blood provoked a heated controversy in France. Finally, in 1670 the procedure was banned. In time, the British Parliament and even the pope followed suit. Blood transfusions fell into obscurity for the next 150 years.

First successful transfusion

Cornishman Richard Lower examined the effects of changes in blood volume on circulatory function and developed methods for cross-circulatory study in animals, obviating clotting by closed arteriovenous connections. His newly devised instruments eventually led to actual transfusion of blood.

"Many of his colleagues were present. . . towards the end of February 1665 [when he] selected one dog of medium size, opened its jugular vein, and drew off blood, until . . . its strength was nearly gone . . . Then, to make up for the great loss of this dog by the blood of a second, I introduced blood from the cervical artery of a fairly large mastiff, which had been fastened alongside the first, until this latter animal showed . . . it was overfilled . . . by the inflowing blood." After he "sewed up the jugular veins," the animal recovered "with no sign of discomfort or of displeasure."

Lower had performed the first blood transfusion between animals. He was then "requested by the Honorable [Robert] Boyle . . . to acquaint the Royal Society with the procedure for the whole experiment," which he did in December of 1665 in the Society’s Philosophical Transactions. On 15 June 1667 Denys, then a professor in Paris, carried out the first transfusion between humans and claimed credit for the technique, but Lower’s priority cannot be challenged.

Six months later in London, Lower performed the first human transfusion in Britain, where he "superintended the introduction in his [a patient’s] arm at various times of some ounces of sheep’s blood at a meeting of the Royal Society, and without any inconvenience to him." The recipient was Arthur Coga, "the subject of a harmless form of insanity." Sheep’s blood was used because of speculation about the value of blood exchange between species; it had been suggested that blood from a gentle lamb might quiet the tempestuous spirit of an agitated person and that the shy might be made outgoing by blood from more sociable creatures. Lower wanted to treat Coga several times, but his patient wisely refused. No more transfusions were performed. Shortly before, Lower had moved to London, where his growing practice soon led him to abandon research. [1]

The first successes


The science of blood transfusion dates to the first decade of the 20th century, with the discovery of distinct blood types leading to the practice of mixing some blood from the donor and the receiver before the transfusion (an early form of cross-matching).

In 1818, Dr. James Blundell, a British obstetrician, performed the first successful blood transfusion of human blood, for the treatment of postpartum hemorrhage. He used the patient's husband as a donor, and extracted four ounces of blood from his arm to transfuse into his wife. Between 1825 and 1830, Dr. Blundell performed 10 transfusions, five of which were beneficial, and published his results. He also invented many instruments for the transfusion of blood. He made a substantial amount of money from this endeavour, roughly $2 million in 1827 dollars (about $50 million adjusted for inflation).[citation needed]

In 1840, at St George's Hospital Medical School in London, Samuel Armstrong Lane, aided by Dr. Blundell, performed the first successful whole blood transfusion to treat hemophilia.

George Washington Crile is credited with performing the first surgery using a direct blood transfusion at the Cleveland Clinic.

Many patients had died and it was not until 1901, when the Austrian Karl Landsteiner discovered human blood groups, that blood transfusions became safer. Mixing blood from two individuals can lead to blood clumping or agglutination. The clumped red cells can crack and cause toxic reactions. This can have fatal consequences. Karl Landsteiner discovered that blood clumping was an immunological reaction which occurs when the receiver of a blood transfusion has antibodies (A, B, both A & B, or neither) against the donor blood cells. Karl Landsteiner's work made it possible to determine blood groups (A, B, AB, O) and thus paved the way for blood transfusions to be carried out safely. For this discovery he was awarded the Nobel Prize in Physiology or Medicine in 1930.

Development of blood banking


While the first transfusions had to be made directly from donor to receiver before coagulation, in the 1910s it was discovered that by adding anticoagulant and refrigerating the blood it was possible to store it for some days, thus opening the way for blood banks. The first non-direct transfusion was performed on March 27, 1914 by the Belgian doctor Albert Hustin, who used sodium citrate as an anticoagulant. The first blood transfusion using blood that had been stored and cooled was performed on January 1, 1916. Oswald Hope Robertson, a medical researcher and U.S. Army officer, is generally credited with establishing the first blood bank while serving in France during World War I.

The first academic institution devoted to the science of blood transfusion was founded by Alexander Bogdanov in Moscow in 1925. Bogdanov was motivated, at least in part, by a search for eternal youth, and remarked with satisfaction on the improvement of his eyesight, suspension of balding, and other positive symptoms after receiving 11 transfusions of whole blood.

In fact, following the death of Vladimir Lenin, Bogdanov was entrusted with the study of Lenin's brain, with a view toward resuscitating the deceased Bolshevik leader. Tragically, but perhaps not unforeseeably, Bogdanov lost his life in 1928 as a result of one of his experiments, when the blood of a student suffering from malaria and tuberculosis was given to him in a transfusion. Some scholars (e.g. Loren Graham) have speculated that his death may have been a suicide, while others attribute it to blood type incompatibility, which was still incompletely understood at the time.[3]

The modern era


Following Bogdanov's lead, the Soviet Union set up a national system of blood banks in the 1930s. News of the Soviet experience traveled to America, where in 1937 Bernard Fantus, director of therapeutics at the Cook County Hospital in Chicago, established the first hospital blood bank in the United States. In creating a hospital laboratory that preserved and stored donor blood, Fantus originated the term "blood bank". Within a few years, hospital and community blood banks were established across the United States.

In the late 1930s and early 1940s, Dr. Charles R. Drew's research led to the discovery that blood could be separated into blood plasma and red blood cells, and that the plasma could be frozen separately. Blood stored in this way lasted longer and was less likely to become contaminated.

Another important breakthrough came in 1939-40 when Karl Landsteiner, Alex Wiener, Philip Levine, and R.E. Stetson discovered the Rhesus blood group system, which was found to be the cause of the majority of transfusion reactions up to that time. Three years later, the introduction by J.F. Loutit and Patrick L. Mollison of acid-citrate-dextrose (ACD) solution, which reduces the volume of anticoagulant, permitted transfusions of greater volumes of blood and allowed longer term storage.

Carl Walter and W.P. Murphy, Jr., introduced the plastic bag for blood collection in 1950. Replacing breakable glass bottles with durable plastic bags allowed for the evolution of a collection system capable of safe and easy preparation of multiple blood components from a single unit of whole blood. Further extending the shelf life of stored blood was an anticoagulant preservative, CPDA-1, introduced in 1979, which increased the blood supply and facilitated resource-sharing among blood banks.

As of 2006, there were about 15 million units of blood transfused per year in the United States.[4]

Precautions


Compatibility


Great care is taken in cross-matching to ensure that the recipient's immune system will not attack the donor blood. In addition to the familiar human blood types (A, B, AB and O) and Rh factor (positive or negative) classifications, other minor red cell antigens are known to play a role in compatibility. These other types can become increasingly important in people who receive many blood transfusions, as their bodies develop increasing resistance to blood from other people via a process of alloimmunization.

The key importance of the Rh group is its role in hemolytic disease of the fetus and newborn (HDFN). When an Rh-negative mother carries an Rh-positive fetus, she can become immunized against the Rh antigen. This usually is not important during that pregnancy, but in the following pregnancies she can develop an immune response to the Rh antigen, and her immune system can attack the baby's red cells through the placenta. Mild cases of HDFN can lead to disability, and some severe cases are fatal. Rh-D is the most commonly involved red cell antigen in HDFN, but other red cell antigens can also cause the condition. The "positive" or "negative" in blood types such as "O positive" refers to the Rh-D antigen.

HDN prevention started in the 1960s, when it was noted that children of pregnant women who had received anti-Rh immunoglobulin did not develop the disease. From then on, Rh-negative pregnant women receive immunoglobulin doses at several moments during pregnancy and after childbirth if the baby is Rh-positive. In current practice, Rh-negative women of fertile age will not receive a transfusion of Rh-positive blood except in desperate situations when nothing else is available.

Transfusion Transmitted Infections


A number of infectious diseases (such as HIV, syphilis, hepatitis B and hepatitis C, among others) can be passed from the donor to recipient. This has led to strict human blood transfusion standards in developed countries. Standards include screening for potential risk factors and health problems among donors and laboratory testing of donated units for infection.

Among the diseases that can be transmitted via transfusion are:

* HIV-1 and HIV-2
* Human T-lymphotropic virus (HTLV-1 and HTLV-2)
* Hepatitis C virus
* Hepatitis B virus
* West Nile virus - All units of blood in the U.S. are screened for this virus.
* Treponema pallidum (the causative agent of syphilis; testing now serves largely as a screen for high-risk lifestyle, as the last reported case of transfusion-transmitted syphilis was in 1965.)
* Malaria - Donors in the United States and Europe are screened for travel to malarial risk countries, and in Australia donors are tested for malaria.
* Chagas Disease - A screening test has been implemented for this disease in the United States, but is not yet required.
* variant Creutzfeldt-Jakob Disease (the human disease linked to "Mad Cow Disease") has been shown to be transmissible in blood products. No test exists for this, but various measures have been taken to reduce risks.
* Some medications may be transmitted in donated blood, and this is especially a concern with pregnant women and medications such as Avodart and Propecia.
* Cytomegalovirus or CMV is a major problem for patients with compromised immune systems and for neonates, but is not generally a concern for most recipients.

As of mid-2005, all donated blood in the United States is screened for HIV, Hepatitis B and C, HTLV-1 and 2, West Nile Virus, and Treponema pallidum.[5][6] Blood which tests positive for any of the diseases it is tested for is discarded.

When a person's need for a transfusion can be anticipated, as in the case of scheduled surgery, autologous donation can be used to protect against disease transmission and eliminate the problem of blood type compatibility. "Directed" donations from donors known to the recipient were a common practice during the initial years of HIV. These kinds of donations are still common in developing countries.

Processing of blood prior to transfusion


Donated blood is usually subjected to processing after it is collected, to make it suitable for use in specific patient populations. Examples include:

* Component separation: red cells, plasma and platelets are separated into different containers and stored in appropriate conditions so that their use can be adapted to the patient's specific needs. Red cells work as oxygen transporters, plasma is used as a supplement of coagulation factors, and platelets are transfused when their count is very low or their function severely impaired. Blood components are usually prepared by centrifugation: centrifugal force makes the red cells, leukocytes, plasma and platelets form different layers in the blood bag, according to their different densities, and the bag is then processed to separate those layers into their final containers. Temperature also plays a key role in component storage: plasma must be frozen as soon as possible to −18 °C (−0.4 °F) or colder, red cells must be refrigerated (1-6 °C, 34-43 °F) and platelets are kept on continuously shaking platforms at room temperature (20-24 °C, 68-75 °F). There are several component preparation techniques, but two methods are common for Whole Blood derived platelets: the platelet-rich plasma separation technique (mostly used in the USA) and the buffy coat technique (outside the USA).
* Leukoreduction, also known as leukodepletion, is the removal of white blood cells from the blood product by filtration. Leukoreduced blood is less likely to cause alloimmunization (development of antibodies against specific blood types), and less likely to cause febrile transfusion reactions. Also, leukoreduction greatly reduces the chance of cytomegalovirus (CMV) transmission. Leukoreduced blood is appropriate for:[7]
o Chronically transfused patients
o Potential transplant recipients
o Patients with previous febrile nonhemolytic transfusion reactions
o CMV seronegative at-risk patients for whom seronegative components are not available

Some blood banks routinely leukoreduce all collected blood. There is some evidence that this reduces the risk of CJD transmission.

* Irradiation. In patients who are severely immunosuppressed and at risk for transfusion-associated graft-versus-host disease, transfused red cells may be subjected to irradiation with a targeted dose of 25 Gy, at least 15 Gy, to prevent the donor T lymphocytes from dividing in the recipient.[8] Irradiated blood products are appropriate for:
o Patients with hereditary immune deficiencies
o Patients receiving blood transfusions from relatives in directed-donation programs
o Patients receiving large doses of chemotherapy, undergoing stem cell transplantation, or with AIDS (controversial).
* CMV screening. Cytomegalovirus, or CMV, is a virus which infects white blood cells. Many people are asymptomatic carriers. In patients with significant immune suppression (e.g. recipients of stem cell transplants) who have not previously been exposed to CMV, blood products that are CMV-negative are preferred. Leukoreduced blood products sometimes substitute for CMV-negative products, since the removal of white blood cells removes the source of CMV transmission (see leukoreduction above). The target for leukoreduction is <5x10^6 residual leukocytes for a full unit of Red Blood Cells, while the same amount of unfiltered blood contains on the order of 10^9 leukocytes[9], so this reduces but does not eliminate the risk (a short calculation follows this list).
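As a quick check of the leukoreduction figures quoted in the last bullet, using only those numbers:

    # Leukoreduction, using the residual-leukocyte figures quoted above.
    residual_leukocytes = 5e6      # target: fewer than 5 x 10^6 per leukoreduced unit
    unfiltered_leukocytes = 1e9    # order of magnitude in an unfiltered unit

    reduction_factor = unfiltered_leukocytes / residual_leukocytes      # ~200x
    removed_fraction = 1 - residual_leukocytes / unfiltered_leukocytes  # >99.5%

    print(f"Reduction factor: ~{reduction_factor:.0f}x")
    print(f"Leukocytes removed: about {removed_fraction:.1%}")
    # A large reduction, but not zero - which is why filtration reduces rather
    # than eliminates the risk of CMV transmission.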

Neonatal transfusion

To ensure the safety of blood transfusion to pediatric patients, hospitals take additional precautions to avoid infection and prefer to use specially tested pediatric blood units that are guaranteed negative for cytomegalovirus. Most guidelines recommend the provision of CMV-negative blood components, and not simply leukoreduced components, for newborns or low-birthweight infants in whom the immune system is not fully developed.[10] These specific requirements place additional restrictions on blood donors who can donate for neonatal use. Neonatal transfusions are usually top-up transfusions, exchange transfusions, or partial exchange transfusions. Top-up transfusions replace investigational losses and correct mild degrees of anemia, up to 5-15 ml/kg. Exchange transfusions are done for correction of anemia, removal of bilirubin, removal of antibodies and replacement of red cells. Ideally, plasma-reduced red cells that are not older than 5 days are used.[11]

Terminology


The terms type and screen are used for the testing that (1) determines the blood group (ABO compatibility) and (2) screens for alloantibodies.[12] It takes about 45 minutes to complete (depending on the method used). The blood bank technologist also checks for special requirements of the patient (e.g. need for washed, irradiated or CMV-negative blood) and the history of the patient to see if they have a previously identified antibody.

A positive screen warrants an antibody panel/investigation. An antibody panel consists of commercially prepared group O red cell suspensions from donors that have been phenotyped for commonly encountered and clinically significant alloantibodies. Donor cells may have homozygous (e.g. K+k-), heterozygous (K+k+) expression or no expression of various antigens (K-k+). The phenotypes of all the donor cells being tested are shown in a chart. The patient's serum is tested against the various donor cells using an enhancement method, e.g. gel or LISS. Based on the reactions of the patient's serum against the donor cells, a pattern will emerge to confirm the presence of one or more antibodies. Not all antibodies are clinically significant (i.e. cause transfusion reactions, HDN, etc.). Once the patient has developed a clinically significant antibody it is vital that the patient receive antigen-negative phenotyped red blood cells to prevent future transfusion reactions. A direct antiglobulin test (DAT) is also performed as part of the antibody investigation.[13]
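The pattern-matching idea behind panel interpretation can be shown with a much-simplified sketch: a single antibody specificity is consistent with the results if its antigen is present on every panel cell that reacted with the patient's serum and absent from every cell that did not. The panel phenotypes and reactions below are invented for illustration, and real interpretation must also account for antigen dosage, multiple antibodies, and reaction phase and strength.

    # Much-simplified antibody panel interpretation (invented data, for illustration).
    panel = [
        # (cell id, phenotype {antigen: present?}, patient serum reacted?)
        ("cell1", {"K": True,  "Fya": False, "Jka": True},  True),
        ("cell2", {"K": False, "Fya": True,  "Jka": True},  False),
        ("cell3", {"K": True,  "Fya": True,  "Jka": False}, True),
        ("cell4", {"K": False, "Fya": False, "Jka": True},  False),
    ]

    antigens = {antigen for _, phenotype, _ in panel for antigen in phenotype}

    # An antigen fits as a single specificity if presence matches reactivity on every cell.
    candidates = [a for a in antigens
                  if all(phenotype[a] == reacted for _, phenotype, reacted in panel)]

    print("Antibody specificities consistent with the pattern:", candidates)   # ['K']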

Once the type and screen has been completed, potential donor units will be selected based on compatibility with the patient's blood group, special requirements (e.g. CMV negative, irradiated or washed) and antigen-negative status (in the case of an antibody). If there is no antibody present or suspected, the immediate spin or CAC (computer assisted crossmatch) method may be used.

In the immediate spin method, two drops of patient serum are tested against a drop of 3-5% suspension of donor cells in a test tube and spun in a serofuge. Agglutination or hemolysis in the test tube is a positive reaction and the unit should not be transfused.

If an antibody is suspected, potential donor units must first be screened for the corresponding antigen by phenotyping them. Antigen negative units are then tested against the patient plasma using an antiglobulin/indirect crossmatch technique at 37 degrees Celsius to enhance reactivity and make the test easier to read.

If there is no time for a crossmatch, the blood issued is called "uncross-matched blood". Uncross-matched blood is O-positive or O-negative; O-negative is usually used for children and women of childbearing age. It is preferable for the laboratory to obtain a pre-transfusion sample in these cases so a type and screen can be performed to determine the actual blood group of the patient and to check for alloantibodies.

Procedure


Blood transfusions can be grouped into two main types depending on their source:

* Homologous transfusions, or transfusions using the stored blood of others; these are often called allogeneic rather than homologous transfusions.
* Autologous transfusions, or transfusions using the patient's own stored blood.

Donor units of blood must be kept refrigerated to prevent bacterial growth and to slow cellular metabolism. The transfusion must begin within 30 minutes after the unit has been taken out of controlled storage.

Blood can only be administered intravenously. It therefore requires the insertion of a cannula of suitable caliber.

Before the blood is administered, the personal details of the patient are matched with the blood to be transfused, to minimize risk of transfusion reactions. Clerical error is a significant source of transfusion reactions and attempts have been made to build redundancy into the matching process that takes place at the bedside.

A unit (up to 500 ml) is typically administered over 4 hours. In patients at risk of congestive heart failure, many doctors administer a diuretic to prevent fluid overload, a condition called Transfusion Associated Circulatory Overload or TACO. Acetaminophen and/or an antihistamine such as diphenhydramine are sometimes given before the transfusion to prevent other types of transfusion reactions.

Blood donation

Blood is most commonly donated as whole blood by inserting a catheter into a vein and collecting it in a plastic bag (mixed with anticoagulant) via gravity. Collected blood is then separated into components to make the best use of it. Aside from red blood cells, plasma, and platelets, the resulting blood component products also include albumin protein, clotting factor concentrates, cryoprecipitate, fibrinogen concentrate, and immunoglobulins (antibodies). Red cells, plasma and platelets can also be donated individually via a more complex process called apheresis.

In developed countries, donations are usually anonymous to the recipient, but products in a blood bank are always individually traceable through the whole cycle of donation, testing, separation into components, storage, and administration to the recipient. This enables management and investigation of any suspected transfusion-related disease transmission or transfusion reaction. In developing countries the donor is sometimes specifically recruited by or for the recipient, typically a family member, and the donation occurs immediately before the transfusion.

Risks to the recipient


There are risks associated with receiving a blood transfusion, and these must be balanced against the benefit which is expected. The most common adverse reaction to a blood transfusion is a febrile non-hemolytic transfusion reaction, which consists of a fever which resolves on its own and causes no lasting problems or side effects.

The symptoms of a hemolytic reaction include chills, headache, backache, dyspnea, cyanosis, chest pain, tachycardia, and hypotension.

Rarely, blood products can be contaminated with bacteria; the risk of severe bacterial infection and sepsis is estimated, as of 2002, at about 1 in 50,000 platelet transfusions and 1 in 500,000 red blood cell transfusions.[14]

There is a risk that a given blood transfusion will transmit a viral infection to its recipient. As of 2006, the risk of acquiring hepatitis B via blood transfusion in the United States is about 1 in 250,000 units transfused, and the risk of acquiring HIV or hepatitis C in the U.S. via a blood transfusion is estimated at about 1 in 2 million units transfused. These risks were much higher in the past, before the advent of second- and third-generation tests for transfusion-transmitted diseases. The implementation of nucleic acid testing (NAT) in the early 2000s has further reduced risks, and confirmed viral infections by blood transfusion are extremely rare in the developed world.
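To put these per-unit figures into perspective, one can estimate the chance of at least one transmission over a course of several units. The calculation below treats each unit as an independent exposure with the per-unit risks quoted above, which is a simplifying assumption rather than an epidemiological model.

# Simplified cumulative-risk estimate: each unit treated as an independent
# exposure with the per-unit risks quoted above.
per_unit_risk = {
    "bacterial sepsis (platelets)": 1 / 50_000,
    "hepatitis B":                  1 / 250_000,
    "HIV or hepatitis C":           1 / 2_000_000,
}

def cumulative_risk(p_per_unit, n_units):
    """Probability of at least one event across n independent units."""
    return 1 - (1 - p_per_unit) ** n_units

for name, p in per_unit_risk.items():
    print(f"{name}: about 1 in {1 / cumulative_risk(p, 10):,.0f} over 10 units")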

Transfusion-related acute lung injury (TRALI) is an increasingly recognized adverse event associated with blood transfusion. TRALI is a syndrome of acute respiratory distress, often with fever, non-cardiogenic pulmonary edema, and hypotension, which may occur as often as 1 in 2,000 transfusions.[15] Symptoms can range from mild to life-threatening, but most patients recover fully within 96 hours, and the mortality rate from this condition is less than 10%.[16] Although the cause of TRALI is not fully understood, it has been consistently associated with anti-HLA antibodies. Because anti-HLA antibodies are strongly associated with prior pregnancy, several transfusion organizations (the Blood and Tissues Bank of Cantabria in Spain and the National Health Service in Britain) have decided to use only plasma from men for transfusion.

Other risks associated with receiving a blood transfusion include volume overload, iron overload (with multiple red blood cell transfusions), transfusion-associated graft-vs.-host disease, anaphylactic reactions (in people with IgA deficiency), and acute hemolytic reactions (most commonly due to the administration of mismatched blood types).

Transformation from one type to another


Scientists working at the University of Copenhagen reported in the journal Nature Biotechnology in April 2007 the discovery of enzymes that can potentially convert blood from groups A, B, and AB into group O. These enzymes do not affect the Rh group of the blood.[17][18]

Objections to blood transfusion


Objections to blood transfusions may arise for personal, medical, or religious reasons. For example, Jehovah's Witnesses object to blood transfusion primarily on religious grounds - they believe that blood is sacred - although they have also highlighted possible complications associated with transfusion.

Animal blood transfusion


Veterinarians also administer transfusions to animals. Various species require different levels of testing to ensure a compatible match. For example, cats have 3 known blood types, cattle have 11, dogs have 12, pigs have 16, and horses have 34. However, in many species (especially horses and dogs), crossmatching is not required before the first transfusion, as antibodies against non-self cell surface antigens are not expressed constitutively - i.e. the animal has to be sensitized before it will mount an immune response against the transfused blood.

The rare and experimental practice of inter-species blood transfusions is a form of xenograft.

Blood transfusion substitutes

As of 2008, there are no widely utilized oxygen-carrying blood substitutes for humans; however, there are widely available non-blood volume expanders and other blood-saving techniques. These are helping doctors and surgeons avoid the risks of disease transmission and immune suppression, address the chronic blood donor shortage, and address the concerns of Jehovah's Witnesses and others who have religious objections to receiving transfused blood.

A number of blood substitutes are currently in clinical evaluation. Most attempts to find a suitable alternative to blood have so far concentrated on cell-free hemoglobin solutions. Blood substitutes could make transfusions more readily available in emergency medicine and in pre-hospital EMS care. If successful, such a blood substitute could save many lives, particularly in trauma cases involving massive blood loss. Hemopure, a hemoglobin-based therapy, is approved for use in South Africa.

VIDEO




NEXT UP


Plasmapheresis