Articles in this document: Metabolic Syndrome - Gout and Hyperuricemia - Lipid Guidelines - Arthritis
Gout
The Diagnosis and Treatment of Gout
Current Treatment Recommendations for Acute and Chronic Gout
Gout: Treatment Reaction to Allopurinol -- What Next?
Urate-Lowering Therapy for Gout: Focus on Febuxostat
Update on Gout and Hyperuricemia
New Therapeutic Targets for Hyperuricaemia and Gout
Management of Gout Often Requires Multiple Medications
Uric Acid and Evolution
Uric Acid in Heart Disease - A New C-reactive Protein?
White Papules on the Ear: Discussion of Answer
Uric Acid: Role in Cardiovascular Disease and Effects of Losartan
Recent Developments in Diet and Gout
A Prescription for Lifestyle Change in Patients with Hyperuricemia and Gout
Gout Drug May Lower Blood Pressure
Postmenopausal Hormone Therapy May Modestly Reduce Gout Risk
Health Implications of Fructose Consumption and the Risk of Gout in Humans
Fructose -- How Worried Should We Be?
Fructose Intake Associated With an Increased Risk for Gout
Soft Drinks, Fructose Consumption, and the Risk of Gout in Men [Sugar & Gout]
New Lipid Guidelines Recommend Tighter Control: Management of Hypercholesterolemia
New Guidelines for Managing Hypercholesterolemia
Hypertriglyceridemia
Therapeutic Lifestyle Changes and Pharmaceutical Care in the Treatment of Dyslipidemias in Adults
Prevalence of Obesity Among Adults With Arthritis: Editorial Note
What Epidemiology Has Told Us About Risk Factors and Aetiopathogenesis in Rheumatic Diseases
Protein, Iron, and Meat Consumption and Risk for Rheumatoid Arthritis
Metabolic Syndrome: Connecting and Reconciling Cardiovascular and Diabetes Worlds
Identification and Management of Metabolic Syndrome: The Role of the APN
Lifestyle Treatment of the Metabolic Syndrome
Eating Fish May Reduce the Risk for Subclinical Brain Abnormalities
Vitamin D Supplementation: An Update
High-Protein Diets
What Should We Eat? Evidence from Observational Studies: Conclusion
Exploring the Link between Blood Pressure and Lifestyle
High Plasma Urate Strongly Linked to Reduced PD [Parkinson's] Risk
Urolithiasis/Nephrolithiasis: What's It All About?
Chronic Kidney Disease: CT or MRI?
Recent Advances in the Pathophysiology of Nephrolithiasis
Balancing Diuretic Therapy in Heart Failure: Loop Diuretics, Thiazides, and Aldosterone Antagonists
The Diagnosis and Treatment of Gout
Robert G. Smith, DPM, MSc, RPh, CPed
Posted: 07/23/2009; US Pharmacist. 2009;34(5):40-47. © 2009 Jobson Publishing
Clinical literature has recently reported that gout is the most common inflammatory arthritis in the United States, with 3 to 5 million sufferers.[1,2] Both the incidence and the prevalence of gout appear to be increasing worldwide.[3] Gout is perhaps the oldest known type of arthritis; it has been colorfully depicted in art and literature along with commentaries on the moral character of the gout sufferer (Figure 1). Literary accounts have referred to gout's association with rich foods and excessive alcohol consumption, hence the description "the disease of kings."
Gout is a monosodium urate, monohydrate crystal deposit disease with a rich history mirroring the evolution of medicine itself.[4,5] It was among the earliest diseases to be recognized as a clinical entity. Because gout has been recognized for so many centuries, it might be assumed that its diagnosis and treatment hold little remaining interest; in practice, however, the management of gout remains a challenge for the clinician caring for the patient with this disease.[6]
Recent medical literature recognizes that most patients with gout visit a primary care physician for disease management, but there are challenges to diagnosing and treating gout in this setting.[7] Further, Weaver et al stated that the arrival of newer investigational agents on the market has prompted rheumatologists to consider how they can share current information to improve gout management.[7] This concept of sharing current information on the management of gout is the main impetus for this review. It is hoped that pharmacists will be empowered with this knowledge to assist the prescribing clinician in maximizing patient outcomes when treating gout. First, to serve as a foundation, new insights into the pathogenesis of hyperuricemia and gout are discussed. Second, risk factors, typical presentation of symptoms, and key diagnostic parameters are reviewed so that the pharmacist may gain an appreciation of the disease. Finally, nonpharmacologic treatment modalities and both current and newer investigational therapeutics are presented so that the pharmacist may facilitate greater patient adherence through medication counseling.
Pathogenesis
Biologically significant hyperuricemia occurs when serum urate levels exceed solubility (~6.8 mg/dL). Hyperuricemia is a common serum abnormality that does not always progress to gout. Humans generate about 250 to 750 mg of uric acid per day, derived from dietary purines and the breakdown of dying tissues. The exact cause of gout is not yet known, although it may be linked to a genetic defect in purine metabolism. Uric acid, the most insoluble of the purine substances, is a trioxypurine containing three oxygen groups. The pathogenesis of gout starts with the crystallization of urate within the joint, bursa, or tendon sheath, which leads to inflammation as a result of phagocytosis of monosodium urate crystals; the disease is usually associated with an elevated concentration of uric acid in the blood.[2,8] Specifically, uric acid is a breakdown product of the purines adenine, guanine, hypoxanthine, and xanthine. Adenine and guanine are found in both DNA and RNA. Hypoxanthine and xanthine are not incorporated into nucleic acids as they are synthesized, but they are important intermediates in the synthesis and degradation of the purine nucleotides. Both undissociated uric acid and its monosodium salt, the primary form found in the blood, are only sparingly soluble.
The amount of urate in the body depends on the balance between dietary intake, synthesis, and excretion.[9] In people with primary gout, defects in purine metabolism lead to hyperuricemia, or high levels of uric acid in the blood. Urate can accumulate through increased production of uric acid, abnormal renal retention of uric acid, or both: overproduction of urate accounts for about 10% of gout patients, and underexcretion for the remaining 90%.[9] The majority of patients with endogenous overproduction of urate have the condition as a result of salvaged purines arising from increased cell turnover in proliferative and inflammatory disorders, from pharmacologic intervention that increases urate production, and from tissue hypoxia.[9]
The renal mechanism for handling urate is glomerular filtration followed by partial tubular reabsorption.[10] The final fractional excretion of uric acid is about 20% of what was originally filtered. Uric acid levels independently predict renal failure in patients with preexisting renal disease. Hyperuricemia causes interstitial and glomerular changes that are independent of the presence of crystals and that closely resemble the changes of chronic hypertension. In addition, hyperuricemia is epidemiologically linked to hypertension and appears to be an independent risk factor for its development. Finally, hyperuricemia is defined as a serum uric acid level greater than 6.8 mg/dL; serum uric acid can nevertheless be normal, especially during an acute gout attack. The target goal of urate-lowering treatment is a level less than 6.0 mg/dL.
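The renal handling just described is often quantified clinically as the fractional excretion of uric acid from spot samples. The sketch below shows the standard formula; it is an illustration only, and the function name and example values are invented, not taken from this article.

```python
def fractional_excretion_uric_acid(urine_ua, serum_ua, urine_cr, serum_cr):
    """Fractional excretion of uric acid (FEUA), as a percentage.

    FEUA = (urine urate x serum creatinine)
           / (serum urate x urine creatinine) x 100
    All four concentrations must share the same units (e.g., mg/dL).
    """
    return (urine_ua * serum_cr) / (serum_ua * urine_cr) * 100.0

# Hypothetical spot-sample values (mg/dL), for illustration only
feua = fractional_excretion_uric_acid(
    urine_ua=35.0, serum_ua=8.0, urine_cr=90.0, serum_cr=1.0)
print(f"FEUA = {feua:.1f}%")  # prints "FEUA = 4.9%" for these values
```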
Risk Factors for Gout
A number of references by Choi et al have identified, explained, and reviewed the risk factors for the development of gout.[11-13] Nonmodifiable risk factors include male sex, postmenopausal status in women, genetic influences, end-stage renal disease, and major organ transplantation. Prevalence increases with age, from 1.8/1,000 in people under the age of 45 years to 30.8/1,000 in those over age 65.[8] Elevated serum urate levels are also associated with increased risk.[8] Hypertension is a definite risk factor, and a significant percentage of patients with hyperuricemia will go on to develop hypertension. Hyperuricemia and gout have been linked to other disease states including metabolic syndrome, cardiac disease, stroke, and renal disease.[8] The risk of gout correlates with truncal obesity, as measured by body mass index and waist-to-hip ratio.[8,11]
Avoidable risk factors include diet and medications. Foods implicated in causing gout are red and organ meats, seafood, and foods containing high-fructose corn syrup; fructose itself has been recognized as a cause of hyperuricemia.[8,14-16] Choi et al conducted a small prospective study investigating the ability of diets high in fructose to induce higher serum urate levels relative to diets high in glucose or low in carbohydrates.[16] High alcohol intake, especially beer, is also a risk factor; the guanosine content of beer has been identified as a cause of gouty attacks.
Certain drugs, particularly thiazide diuretics and the cyclosporine administered to transplant patients, have been implicated in gouty attacks. Despite the cardioprotection offered by low-dose (81-mg) aspirin, this drug may be associated with the precipitation of gout.[8,17] Cyclosporine in particular has been reported to cause a rapidly occurring type of gout, swiftly ascending and polyarticular in many cases. Roubenoff underscores that these risk factors are increasing, reporting that gout incidence and prevalence doubled from 1970 to 1990.[18] Furthermore, Wallace et al have reported that the prevalence of gout increased by two cases per 1,000 patients during the 1990s because of lifestyle changes.[19]
Typical Presentation of Gout
Gouty attacks are usually associated with a precipitating event and often occur at night.[6,20] They consist of intense pain involving the lower extremity, with 80% of first attacks involving a single joint; after long periods of time, however, gout attacks may become polyarticular.[6,20,21] The pain results from a dramatic inflammatory response. Some authors have estimated that between 50% and 90% of initial attacks occur in the first metatarsophalangeal joint (podagra).[6,21-23] In postmenopausal women, the distal interphalangeal joints may be involved.[6] Acute gouty arthritis may be accompanied by low-grade fever, chills, and malaise.[6,21,23] The majority of patients experience a second acute gout attack within 1 year of the first episode.[24] Untreated, initial acute gout attacks resolve completely within 3 to 14 days.[6,20,23]
There are four clinical stages of gout.[23] At serum urate concentrations greater than 6.8 mg/dL, urate crystals may start to deposit. Hence, the first stage of gout is known as asymptomatic hyperuricemia. During this first period, urate deposits may directly contribute to organ damage. After sufficient urate deposits have developed around a joint and some traumatic event triggers the release of crystals into the joint space, a patient will suffer an acute gout attack and move into the second stage, known as acute gouty arthritis. During this second stage, acute inflammation in the joint caused by urate crystallization and crystal phagocytosis is present. This episode is known as a "flare" and is self-resolving and likely to recur. The interval between acute flare gout attacks with persistent crystals in the joints is the third stage and is known as an intercritical period. When crystal deposits continue to accumulate, patients develop chronically stiff and swollen joints leading to the final stage—advanced gout, which includes the long-term complications of uncontrolled hyperuricemia characterized by chronic arthritis and tophi. The nodular mass of uric acid crystals is described as a tophus and is characteristically deposited in different soft tissue areas of the body in gout. This advanced stage of gout is uncommon because it is avoidable with interventional therapy.[23]
Diagnosis
The diagnosis of gout can be straightforward. The only way to establish the diagnosis with certainty is to demonstrate uric acid crystals in synovial fluid or tophi.[7] Polarizing microscopic examination of synovial fluid reveals negatively birefringent crystals, confirming the diagnosis of gout. It must be recognized that normal uric acid levels are observed in approximately 50% of acute gouty flares.[7]
Dalbeth and McQueen's review summarizes recent advances in plain radiography and advanced imaging for gout, calcium pyrophosphate dihydrate crystal arthropathy, and basic calcium phosphate crystal arthropathy.[24] They suggest that high-resolution ultrasonography may improve noninvasive diagnosis of the crystal-induced arthropathies and allow for monitoring of intra-articular tophi. They also determined that computed tomography provides excellent definition of tophi and bone erosion, and three-dimensional computed tomography assessment of tophus volume is a promising outcome measure in gout.[24] Finally, they state that magnetic resonance imaging is also a reliable method for assessment of tophus size in gout and has an important role in detection of complications of the disease in clinical practice.
Treatment of Gout
Key elements necessary to improve clinical outcomes in gout management include enhancing health professional and patient education as well as exploring novel urate-lowering agents. The pharmacist is among the most valuable health care professionals in assisting clinicians with the treatment of gout. Pharmacists can appreciate that optimal treatment of gout requires adjunctive nonpharmacologic as well as pharmacologic interventional therapies (Table 1). Practicing pharmacists are directed to the Becker and Chohan editorial, which suggests that successful gout management is possible by embracing the 12 evidence-based recommendations from the European League Against Rheumatism (EULAR) (Table 2).[25,26] Treatment and prevention of acute gout flares, as well as the management of hyperuricemia and gout, are best conveyed to patients with a brief narrative, reinforced by easy-to-read tables that the pharmacist can access quickly when a question arises.
A treatment regimen must be individually tailored to each patient. The treatment of gout has three main components: therapy of the acute attack, prophylaxis against gout flares, and management of hyperuricemia.[8] Several aspects must be independently considered when planning to treat a patient with gout. Given that gout is a reversible urate crystal deposit disease, the main objective is to eliminate the urate crystals from the joints and other structures.[27] Li-Yu et al determined through a 10-year prospective investigation that serum urate levels should be reduced to below 6.0 mg/dL in order to eliminate crystals.[28]
Pharmacotherapy for Acute Gout Attacks
Medications used to treat an acute gout attack include nonsteroidal anti-inflammatory drugs (NSAIDs), colchicine, and corticosteroids; a combination of these may also be necessary. A summary of the pharmacologic agents used to treat acute gout is shown in Table 3.[6-8,25-27] These medications have no effect on the serum uric acid level. The classic treatment for gouty arthritis is colchicine. The most common adverse drug event reported with colchicine use is diarrhea.[6] However, even low-dose colchicine may be associated with severe adverse effects and toxicity such as myopathy and myelosuppression.[6,8,27] Monitoring of serum troponin levels during an acute colchicine overdose may help avoid vascular collapse.[29]
Guidelines indicate that fast-acting oral NSAIDs should be used during acute attacks if there are no contraindications.[6-8,26] Since there are no significant clinical differences among NSAIDs, the choice of agent should be based on the agent's side-effect profile, its cost, and the patient's ability to adhere to the prescribed regimen.[6,8,25,27] Suppressive therapy to prevent flares usually involves colchicine or NSAIDs.[8] An important factor in choosing therapeutic agents for an acute attack is the presence of comorbidities. The most common therapy for acute gout in the setting of acute or chronic renal or hepatic failure is corticosteroids.[8] If NSAIDs and colchicine are contraindicated because of patient comorbidities, intra-articular aspiration and injection of a corticosteroid is an effective treatment for an acute attack once the possibility of a septic joint has been eliminated.[6]
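As a rough summary of the selection hierarchy described above, the toy helper below restates the order of preference in code. It is a reading aid only, not a clinical algorithm from this article, and every name and flag is invented for illustration.

```python
def acute_gout_choice(renal_or_hepatic_failure=False,
                      nsaid_contraindicated=False,
                      colchicine_contraindicated=False,
                      septic_joint_excluded=False):
    """Toy restatement of the acute-treatment hierarchy described in the text."""
    if renal_or_hepatic_failure:
        return "systemic corticosteroids"
    if not nsaid_contraindicated:
        return "fast-acting oral NSAID (pick by side effects, cost, adherence)"
    if not colchicine_contraindicated:
        return "colchicine (watch for diarrhea, myopathy, myelosuppression)"
    if septic_joint_excluded:
        return "intra-articular corticosteroid injection"
    return "exclude septic joint before intra-articular corticosteroid"

print(acute_gout_choice(nsaid_contraindicated=True,
                        colchicine_contraindicated=True,
                        septic_joint_excluded=True))
```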
Urate-Lowering Therapy
The therapeutic goal of urate-lowering therapy is to promote dissolution of urate crystals and to prevent crystal formation.[6,28] In addition, urate-lowering therapy is used to prevent disease progression, reduce the frequency of acute attacks, and maintain and improve quality of life. Treatments for chronic gout are aimed at reducing serum urate levels to less than 6.0 mg/dL in order to dissolve existing crystals and prevent formation of new ones.[8] Dore recommends that patients who overproduce urate be treated with allopurinol, as this drug has the advantage of being effective for both overproducers and underexcretors.[6] Patients who underexcrete urate despite near-normal creatinine clearance should be treated with uricosurics. Urate-lowering therapy should be lifelong. If an acute flare occurs when urate-lowering therapy is initiated, therapy should not be discontinued, because doing so will result in fluctuating urate levels.[30] Initiating urate-lowering therapy can mobilize urate deposits, which may precipitate an attack because of rapid serum uric acid lowering.[7] Concomitant prophylaxis with colchicine or a gastroprotected NSAID when initiating urate-lowering therapy has been suggested.[7,25]
Pharmacists can be a tremendous resource by informing the clinician about potential drug interactions and side effects of urate-lowering agents to maximize therapeutic outcomes. Finally, pharmacists must remember that treatment of asymptomatic hyperuricemia is not recommended.[7]
Since 1965, one traditional approach to the treatment of gout has been allopurinol, an isomer of hypoxanthine. Allopurinol is a substrate for xanthine oxidase; its product binds the enzyme so tightly that xanthine oxidase can no longer oxidize its normal substrates. Uric acid production is diminished, and xanthine and hypoxanthine levels in the blood rise. These are more soluble than urate and are less likely to deposit as crystals in the joints. The allopurinol dose must be adjusted in patients with renal impairment.[6] Allopurinol is often started at 100 mg per day, and the daily dosage is increased in 100-mg increments every 2 to 4 weeks.[8,25,26] The usual dosage range is 200 to 300 mg/day for mild gout and 400 to 600 mg/day for moderate to severe gout. Up to 5% of patients are unable to tolerate allopurinol due to adverse effects such as rash, nausea, and bone marrow suppression.[31] If a severe rash occurs, the pharmacist should advise discontinuation of allopurinol. Allopurinol has fewer drug interactions than uricosuric agents.[6] Despite its limitations, allopurinol is used extensively for most gouty patients and is considered safe and effective.[27]
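The titration schedule in the preceding paragraph can be restated as a simple rule. The sketch below is a teaching illustration of that schedule, not a dosing tool; the function, its parameters, and the laboratory values are invented.

```python
def next_allopurinol_dose(current_mg, serum_ua_mg_dl,
                          goal_mg_dl=6.0, step_mg=100, max_mg=600):
    """Illustrative titration rule per the text: raise the daily dose in
    100-mg increments every 2 to 4 weeks until serum urate is below goal,
    within the usual range (200-300 mg/day for mild gout, 400-600 mg/day
    for moderate to severe gout)."""
    if serum_ua_mg_dl < goal_mg_dl:
        return current_mg                 # goal met: hold the current dose
    return min(current_mg + step_mg, max_mg)

# Hypothetical course, with urate rechecked every 2 to 4 weeks
dose = 100                                # usual starting dose, mg/day
for ua in (9.1, 8.2, 7.4, 6.6, 5.8):      # invented lab values, mg/dL
    dose = next_allopurinol_dose(dose, ua)
    print(f"SUA {ua} mg/dL -> dose {dose} mg/day")
```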
Emerging Therapies
Uricosurics are considered second-line therapy for patients who are intolerant of allopurinol. Among the older urate-lowering drugs, probenecid and sulfinpyrazone may be used in patients refractory to allopurinol therapy.[27] In the U.S., probenecid is the only potent uricosuric agent available.[8] Probenecid is most useful in patients with mild gout and normal renal function; its mechanism of action is inhibition of the uric acid transporter (URAT1) involved in the reabsorption of uric acid.[8] Uricosuric therapy is contraindicated in patients with a history of nephrolithiasis and is not effective in patients with a creatinine clearance of less than 50 mL/min. Finally, both losartan and fenofibrate have slight uricosuric properties and may be useful as adjunctive therapy in gout patients with comorbid hypertension and hyperlipidemia.[6,32]
Febuxostat is a new, potent, selective xanthine oxidase inhibitor that received FDA approval in February 2009 for the management of hyperuricemia in patients with gout.[8,33,34] This agent is not a purine analog, although its mechanism is similar to that of allopurinol. The recommended starting dose of febuxostat is 40 mg once a day; for patients who do not achieve a serum uric acid less than 6 mg/dL after 2 weeks at 40 mg, increasing the dose to 80 mg is recommended.[34] Febuxostat has demonstrated efficacy superior to allopurinol.[27,34] It is primarily metabolized by the liver and may be an alternative agent for patients with renal insufficiency. The adverse-effect profile of febuxostat includes elevation of liver enzymes, rash, diarrhea, and headache.[8] The manufacturer has reported a higher rate of cardiovascular thromboembolic events with febuxostat compared with allopurinol.[34] Finally, febuxostat is contraindicated in patients treated with azathioprine, mercaptopurine, or theophylline.[34]
Uric acid oxidase, also known as uricase, is an enzyme that catalyzes the conversion of uric acid to allantoin and is present in all mammals except humans and the higher primates.[27] There is interest in using uricase therapies to lower serum uric acid. Rasburicase, an intravenous recombinant uricase product indicated for tumor lysis syndrome, might be used successfully in unusually severe cases of gout.[35] Rasburicase carries a black box warning for anaphylaxis, hemolysis, and methemoglobinemia. Pegloticase (pegylated recombinant porcine uricase) has also shown urate-lowering efficacy.[14,36] Adding polyethylene glycol (PEG) prolongs the half-life of uricase and decreases its antigenicity. Intravenous administration of PEG-uricase has been investigated for the potential treatment of severe tophaceous gout in patients who are hypersensitive to allopurinol.[37]
Pharmacists should appreciate the relative contraindications to both NSAIDs and corticosteroids as symptomatic therapies. Attention has therefore been directed to recent advances in the understanding of gouty inflammation and the proinflammatory role of several cytokines in the pathophysiology of acute gout.[25,38] Early small clinical trials have identified interleukin-1β as the most prominent of these cytokines in acute gout, and inhibiting interleukin-1 may prove effective and safe in terminating the symptoms of acute gouty arthritis.[25]
Conclusion
Gout is a monosodium urate, monohydrate crystal deposit disease. It was among the earliest diseases to be recognized as a clinical entity. Clinical pharmacists need to be empowered with knowledge to assist prescribing clinicians in order to maximize therapeutic outcomes when treating gout. To achieve this goal, a foundation of new insights into the pathogenesis of hyperuricemia and gout has been reviewed. Risk factors, typical presentation of symptoms, and key diagnostic parameters have been offered so that pharmacists can achieve an appreciation of gout as a significant disease. Both nonpharmacologic modalities and pharmacologic therapies have been discussed so that greater patient adherence through medication counseling can be achieved.
From Medscape Medical News
DASH Diet Plus Exercise Improves Cognitive Function in Sedentary Obese Patients
Pauline Anderson
March 11, 2010 — The low-sodium, low-fat Dietary Approaches to Stop Hypertension (DASH) diet, combined with aerobic exercise and caloric restriction, improves neurocognitive function among sedentary, overweight, or obese patients with high blood pressure, a new study has found.
The study also shows that the beneficial effects of this combined approach are particularly pronounced in subjects with higher carotid artery intima-media thickness (IMT) who are at higher risk for a stroke.
"The present findings could have important implications for improving neurocognitive function among older adults with HBP [high blood pressure], at greater risk for cognitive decline and Alzheimer's disease," write Patrick J. Smith, from the Department of Psychiatry and Behavioral Sciences, Duke University Medical Center, Durham, North Carolina, and colleagues. "Future studies should therefore examine the effects of diet and exercise in adults at elevated risk for dementia."
As part of the larger ENCORE (Exercise and Nutrition intervention for CardiOvasculaR hEalth) study, 124 sedentary participants with elevated blood pressure (systolic 130 - 159 mm Hg or diastolic 85 - 99 mm Hg) and a body mass index of 25 to 40 kg/m2 were randomly assigned to the DASH diet alone, to the DASH diet combined with a behavioral weight management program, or to a usual-diet control group. The patients were not taking antihypertensive medications.
Patients in the DASH-alone group received instructions about modifying their diet but did not exercise or lose weight. The DASH plus weight management group received the same DASH dietary advice and participated in a weight management program that consisted of a 30-minute supervised aerobic exercise program 3 times a week and weekly group counseling sessions focused on behavioral weight loss strategies. Patients in the diet control group maintained their usual dietary habits, did not lose weight, and did not exercise during the 4-month study.
At baseline, researchers used high-resolution ultrasound studies to measure the IMT of the left and right common carotid arteries. At baseline and after 4 months, participants completed a battery of neurocognitive tests to assess performance in the domains of Executive Function-Memory-Learning (EFML) and Psychomotor Speed.
Adherence rates were excellent: participants in the DASH plus weight management group and DASH-alone group attended 92% of the information classes, and those in the DASH plus weight management group attended 90% of the exercise sessions.
Significant Improvements Noted
The DASH plus weight management group had improved EFML relative to the control group (effect size [ES], 0.21; 95% confidence interval [CI], 0.03 - 0.39; Cohen's d = 0.562; P = .008), although the DASH-alone group did not improve relative to the control group.
The improvements in the DASH plus weight management group were comparable to a 14.6-year improvement in predicted age for Trail Making Test B-A performance (speed of drawing consecutive lines between numbers and between numbers and letters), and a 6.1-year improvement for Stroop Interference performance (speed of identifying and differentiating colors and words from lists). In contrast, the control group's performance was comparable to a 9.4-year poorer performance for Trail Making Test B-A and an 11.7-year poorer Stroop Interference performance.
Similar results were observed for the Psychomotor Speed with the DASH plus weight management group (ES, 0.18; 95% CI, 0.02 - 0.33; Cohen's d = 0.480; P = .023) and DASH alone group (ES, 0.15; 95% CI, 0.00 - 0.30; Cohen's d = 0.440; P = .036) exhibiting significant improvements relative to the control group.
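For readers who want to see how effect sizes like the Cohen's d values above are computed from group summary statistics, here is a minimal sketch. The numbers plugged in are placeholders chosen for illustration, not data from the ENCORE study.

```python
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Cohen's d: difference in group means divided by the pooled SD."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2)
                          / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

# Placeholder summary statistics (not taken from the study)
print(round(cohens_d(0.35, 0.60, 41, 0.02, 0.58, 41), 3))  # ~0.559
```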
The researchers found that the participants with greater IMT, and therefore in poorer vascular health, and higher systolic blood pressure showed greater improvements in EFML in the DASH plus weight management group.
"Our finding that both SBP [systolic blood pressure] and IMT moderated the effects of diet and exercise on neurocognition suggests that individuals with vascular disease may be especially likely to benefit from aerobic exercise and diet," the study authors write.
Improvements in EFML in the DASH plus weight management group were mediated by improved cardiorespiratory fitness, whereas improvements in Psychomotor Speed were mediated by weight loss.
Subjects in both treatment groups had lower blood pressure vs the control group, with the DASH plus weight management group having the greatest reduction. The DASH plus weight management group lost the most weight and had the best aerobic capacity.
Possible Protection Against Alzheimer's Disease?
It is not known whether the benefits uncovered by this study can be maintained with time or whether the intervention could affect rates of Alzheimer's disease, said the study authors. Although the study does not reveal the mechanisms for improved cognitive function, "it's possible that the observed improvements in neurocognitive function could be mediated by other factors such as inflammation, growth factors, or other neurochemical changes," they write.
Other healthy diets such as the Mediterranean diet may also be beneficial, they said.
According to background information in the study, an estimated 1 billion men and women worldwide have prehypertension or hypertension. High blood pressure affects 50% of adults 60 years and older and has a lifetime prevalence of 90%. High blood pressure is associated with an increased risk for Alzheimer's disease, mild cognitive impairment, and vascular dementia.
Other studies have shown that lifestyle changes, including diet and exercise, reduce blood pressure and weight, improve neurocognitive function, and may protect against incident Alzheimer's disease, but this study is believed to be the first randomized clinical trial to examine the combined effects of dietary modification and aerobic exercise on neurocognitive function among overweight individuals with high blood pressure.
The study authors have disclosed no relevant financial relationships.
Circulation. Published online March 8, 2010.
From Rheumatology
Uric Acid and Evolution
Bonifacio Álvarez-Lario; Jesús Macarrón-Vicente
Posted: 01/02/2011; Rheumatology. 2010;49(11):2010-2015. © 2010 Oxford University Press
Abstract and Introduction
Abstract
Uric acid (UA) is the end product of purine metabolism in humans owing to the loss of uricase activity through various mutations of its gene during the Miocene epoch, which left humans with higher UA levels than other mammals. Furthermore, 90% of the UA filtered by the kidneys is reabsorbed rather than excreted. These facts suggest that evolution and physiology have not treated UA as a harmful waste product, but as something beneficial to be retained, and this has led various researchers to consider the possible evolutionary advantages of the loss of uricase and the subsequent increase in UA levels. It has been argued that, given the powerful antioxidant activity of UA, the evolutionary benefit could be an increased life expectancy of hominids. For other authors, the loss of uricase and the increase in UA could be a mechanism to maintain blood pressure in times of very low salt ingestion. The oldest hypothesis associates the increase in UA with higher intelligence in humans. Finally, UA has protective effects against several neurodegenerative diseases, suggesting it could have interesting actions on neuronal development and function. These hypotheses and their clinical significance are discussed from an evolutionary perspective. UA has some obvious harmful effects and some less well-known beneficial effects as an antioxidant and neuroprotector.
Introduction
Unlike in the majority of mammals, uric acid (UA) is the end product of purine metabolism in humans, owing to the loss of uricase activity during the evolution of hominids.[1, 2] This loss, together with renal handling of UA, in which the majority of filtered UA is reabsorbed, and the lifestyle and eating habits of developed countries, has led to a high prevalence of hyperuricaemia and its consequences.[1–4] Hyperuricaemia is the primary risk factor for developing gout, and this risk increases exponentially as serum UA levels rise.[2, 5, 6] However, only a minority of those with high UA levels will develop gout.[1, 3, 7] Along with its association with gout, there is increasing evidence of a relationship between hyperuricaemia and hypertension, renal disease, metabolic syndrome, diabetes and cardiovascular disease.[1–6]
Regulation of serum UA levels is complex, with diet and various genetic polymorphisms of renal urate transporters being the main causal factors of hyperuricaemia and gout.[1, 3, 8] The importance of the interaction between genetic factors and lifestyle in the development of hyperuricaemia and gout has a clear example in the Maori of New Zealand.[3, 9] This population has a marked predisposition to develop hyperuricaemia and gout because of a genetic defect in renal urate handling.[10–12] However, there is no mention of gout among them before the 18th century. The lean and strong ancient Maori ate a diet of sweet potato, taro, fern root, birds and fish. After the introduction of a diet low in dairy products and high in fatty meats and carbohydrates in the early 1900s, an epidemic of obesity and gout developed.[9] The drastic changes in their diet and the adoption of the lifestyle of developed countries have given them the highest gout prevalence in the world. This demonstrates that genetically predisposed people will develop hyperuricaemia and gout if they are exposed to other risk factors, such as a high-purine diet, obesity, increased alcohol consumption or diuretic use.[3, 13, 14]
The importance of the genetic and environmental factors mentioned above is determined by the loss of the enzyme uricase, which took place during human evolution. The majority of mammals have very low serum urate levels because UA is converted by uricase to allantoin, a very soluble excretion product that is freely eliminated in the urine.[15] In most fish and amphibians, allantoin is degraded via allantoic acid by allantoinase and allantoicase to urea and glyoxylate; in some marine invertebrates and crustaceans, the urea formed is hydrolysed to NH3 and CO2 by urease[15] (Fig. 1). The lack of uricase makes UA the end product of purine metabolism in humans and other higher primates[1, 2] and is the main reason why serum UA levels in adult males are ~6.0 mg/dl, compared with levels of 0.5–1 mg/dl or less in the majority of mammals.[16–18] This makes us particularly susceptible to changes induced by diet[6] and is the main reason why humans are the only mammals that develop gout spontaneously.[3]
Figure 1. Schematic diagram of purine metabolism.
The origin of uricase is very old: it is present in a great variety of organisms, from bacteria to mammals, and it has different metabolic activities depending on the host organism. The uricases of different species cross-react, have the same tissue specificity and cell location, and have similar molecular weights, suggesting that the uricases of diverse species share a common evolutionary origin.[19, 20]
Humans, some higher primates and certain New World monkeys do not show any detectable uricase activity. This is due to the appearance during the evolutionary process of several mutations of its gene, which rendered it non-functional.[21] In other Old and New World monkeys, uricase activity is moderate, between two and four times lower than that in mice and rabbits,[20] and also less stable.[21]
Wu et al.[21] identified three mutations in the uricase gene in humans, chimpanzees and gorillas: two nonsense mutations, one at codon 33 and another at codon 187, and a mutation in the splice acceptor signal of exon 3. The codon 33 mutation is also present in the orangutan. Based on the phylogeny of human evolution, Wu et al.[21] established that the codon 33 mutation occurred 24 million years ago; the codon 187 mutation took place 16 million years ago, when the orangutan lineage had already diverged; and the exon 3 mutation occurred 13 million years ago, affecting the human/gorilla/chimpanzee line.[21]
Later on, Oda et al.[20] found no uricase activity in humans, chimpanzees, gorillas, orangutans or gibbons, but they did find functional uricase in other monkeys, such as baboons and rhesus monkeys. They identified up to eight independent nonsense mutations in hominids lacking uricase activity. They mainly attribute the loss of uricase activity to the nonsense mutation at codon 33 of exon 2, dating it to 15 million years ago. The promoter region of the gene had probably already been degraded by earlier mutations during the evolutionary process, making a gradual loss of uricase activity more likely than a single-step loss.[20, 22] This is plausible because inactivation of the uricase gene in the mouse causes pronounced hyperuricaemia and urate nephropathy, with more than half of the mutant mice dying before 4 weeks of age.[23] A gradual loss of activity would allow measures of adaptation to the new situation to be developed.[22]
Conclusions
It is still not clear why the evolutionary process in hominids favoured the loss of uricase activity and an increase in UA levels. UA is mainly known for its harmful effects, such as gout and uric lithiasis, as well as its association with hypertension, metabolic syndrome, renal disease and cardiovascular disease.[4, 25] Less well known are its beneficial effects as a powerful antioxidant[16, 26] and its neuroprotective activity,[52–55] and, from the data on the evolution of hominids, it is likely that it has other important, not very well-known physiological effects. The initial signs and symptoms of hyperuricaemia are not life threatening and respond to excellent treatments, and few patients with hyperuricaemia end up developing gout. On the other hand, treatment with allopurinol is not free of serious adverse effects.[59, 60] These observations, plus the lack of a well-established causal role of hyperuricaemia in other associated diseases, have restricted enthusiasm for routinely treating the majority of patients with asymptomatic hyperuricaemia with UA-lowering drugs. Asymptomatic hyperuricaemia is currently not considered an indication for treatment.[2–4, 59] Given the increasing evidence of the association of UA with hypertension and cardiovascular diseases, it is likely that the indications for treating hyperuricaemia will be extended in patients with other risk factors. When making these decisions, the positive effects of a reduction in UA should be weighed against the possible negative effects in neurodegenerative diseases.
Sidebar
Rheumatology Key Messages
The biological reason for the loss of uricase activity and increased levels of UA in humans and certain primates is unknown.
UA is one of the most important antioxidants in human biological fluids.
UA probably has neuroprotective activity.
From International Journal of Clinical Practice
Update on Gout and Hyperuricemia
J. F. Baker; H. Ralph Schumacher
Posted: 02/22/2010; Int J Clin Pract. 2010;64(3):371-377. © 2010
Abstract and Introduction
Abstract
There have been recent advances in the understanding of the underlying mechanisms and treatment of gout and chronic hyperuricemia, making this an important time to review the current state of the disease. The goal of this article is to provide a practical review of the current standard of care as well as to discuss some new developments in management. There is an increasing prevalence of gout and hyperuricemia worldwide. Gout confers a significant individual and societal burden and is often under-treated. Appropriate diagnosis and treatment of acute gout should be followed by aggressive and goal-oriented treatment of hyperuricemia and other risk factors. Allopurinol remains a first-line treatment for chronic hyperuricemia, but uricosuric agents may also be considered in some patients. Febuxostat, a non-purine xanthine-oxidase inhibitor, is a new agent approved for the treatment of hyperuricemia in patients with gout, which may be used when allopurinol is contraindicated. Gout and hyperuricemia appear to be independent risk factors for incident hypertension, renal disease and cardiovascular disease. Physicians should consider cardiovascular risk factors in patients with gout and treat them appropriately and aggressively.
Introduction
The recent approval of febuxostat, the first new agent in 40 years for use in the treatment of chronic hyperuricemia associated with gout, led the editors of the IJCP to request this review to address current guidelines for the diagnosis and management of gout, some new developments for consideration by the treating physician, and the epidemiology of this increasing international health concern. The goal of this review is to provide the practicing physician with a practical overview of the state of gout worldwide, with specific attention to management strategies both current and novel.
Epidemiology
Gout is the most common form of inflammatory arthritis in men > 40 years of age, often presenting initially as podagra (acute onset of pain, erythema and swelling of the first metatarsophalangeal joint). Women may develop gout later in life, and in women it is more likely to involve the upper extremities. The lifetime prevalence of gout in the United States has been estimated at 6.1 million cases, and studies in the UK have reported a prevalence approaching 7%.[1,2] Hyperuricemia is significantly more prevalent; for example, it is now present in as many as 25% of people in China (defined for that study as serum urate > 420 µmol/l in men and > 360 µmol/l in women).[3] The prevalence of gout and hyperuricemia has been increasing over the past few decades in response to a number of factors.
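Because this article mixes US units (mg/dL) with SI units (µmol/l, as in the Chinese prevalence figure above), a small conversion helper may be useful. This is a minimal sketch; the factor of 59.48 follows from uric acid's molar mass of roughly 168.1 g/mol, and the helper names are invented for illustration.

```python
MG_DL_TO_UMOL_L = 59.48   # uric acid MW ~168.1 g/mol, so 1 mg/dL ~= 59.48 umol/L

def mg_dl_to_umol_l(ua_mg_dl):
    return ua_mg_dl * MG_DL_TO_UMOL_L

def umol_l_to_mg_dl(ua_umol_l):
    return ua_umol_l / MG_DL_TO_UMOL_L

print(round(mg_dl_to_umol_l(6.0)))     # ~357 umol/L, the usual treatment goal
print(round(umol_l_to_mg_dl(420), 1))  # ~7.1 mg/dL, the male cut-off cited above
```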
An elevated serum uric acid level (SUA) is perhaps the laboratory value most highly correlated with the metabolic syndrome,[4] a concern given the global westernisation of diet, increasing access to high-caloric foods and the greater prevalence of obesity.[5] Increasing life expectancy and use of predisposing medications, such as diuretics, may also contribute to this trend. Recent evidence suggests that the intake of fructose in beverages and foods, which has also increased worldwide, may increase the risk of both metabolic syndrome and gout.[6,7]
As a result of this global trend, it will be important to establish the wide use of safe, inexpensive and effective approaches to prevent and treat gout worldwide. Close attention to risk factors for gout such as high-purine diet, alcohol use, obesity, diabetes and kidney disease will be important in preventing and controlling an epidemic of hyperuricemia and gout, but it is unlikely to be sufficient.
The Burden of Gout
Patients with acute gout experience significant pain and swelling, which can severely impair quality of life (QOL). Long-term complications from gout can also impair QOL, as patients may develop chronic debilitating arthritis and loss of function. Using a variety of distinct validated measures, patients with gout have been shown to experience a significant overall reduction in QOL.[8]
Gout is also associated with health care and economic costs.[9] It has been estimated that the direct burden of illness for new cases of acute gout may be as high as $27 million in the United States.[10] Care of chronic gout represents approximately 6% of a patient's all-cause yearly healthcare costs.[11] A diagnosis of gout is independently associated with higher medical and arthritic comorbidity as well as higher utilisation of health care.[12] Gout is also associated with significant costs to employers: patients with gout take more absence days and are less productive.[13] This observation again underscores the need for inexpensive yet effective means of prevention and treatment.
Diagnosis
One important key to the early and effective management of gout is an accurate diagnosis. EULAR recommendations address the sensitivity and specificity of certain clinical features and their use in establishing a diagnosis of gout.[14] A history of episodic self-limited joint pain, swelling and erythema is highly sensitive for clinical gout, but not specific. More specific, but still not diagnostic, features include a history of podagra and the presence of a suspected tophus; these clinical markers carry a reasonable specificity of about 80–90% for a provisional diagnosis.[15] If, however, the course and response to appropriate treatment are not as anticipated, it is recommended that undiagnosed inflamed joints be examined by an experienced laboratory for monosodium urate (MSU) crystals, as this permits a definite diagnosis.[15] Identification of MSU crystals in synovial fluid from asymptomatic joints may also allow definite diagnosis.[14,16]
Serum uric acid levels, although elevated at some time in all patients with gout and helpful in diagnosis, should not be relied upon solely in the diagnosis of gout as they may be normal during an acute flare, and hyperuricemia can be present in asymptomatic individuals.[17] Radiographs are not typically useful early in the diagnosis of acute gout although they may help to rule out other causes of joint pain and swelling.
Even with visualisation of crystals, other coexisting causes of joint pain and swelling should be considered, such as trauma and infection. Septic arthritis and gout have been described together, although the occurrence is rare.[18,19]
In patients diagnosed with gout, care should be taken to assess for underlying risk factors for the development of hyperuricemia and gout, such as features of the metabolic syndrome, chronic kidney disease and diuretic use. In patients with the onset of gout under the age of 25, with a family history of young-onset gout, or with a history of renal calculi, renal uric acid excretion should be determined to assess for urate overproduction.
Treatment of Acute Gout
Current published guidelines, including those of EULAR, suggest the use of oral colchicine or non-steroidal anti-inflammatory drugs (NSAIDs) as the first-line systemic treatment for acute gout[20] (Table 1). Oral prednisolone (35 mg daily) has recently been shown to be comparably effective to naproxen 500 mg twice daily in a randomised trial[21] and is often preferred for polyarticular gout. As patients frequently have comorbidities associated with hyperuricemia and gout, the risks and benefits of these systemic treatments should be considered in the individual patient. For instance, uncontrolled diabetes and active infection are often contraindications to systemic corticosteroids, while NSAIDs should be avoided in patients with chronic kidney disease. High doses and hourly use of colchicine should be avoided, if possible, because of the high frequency of toxicity. However, use of this medication at low doses (0.6 mg 2–3 times daily) is widely accepted and may be sufficient for some patients, although the efficacy of this approach has not been demonstrated in controlled trials.
An alternative to systemic treatment is intra-articular injection, which is considered safe and effective. This modality has not been well studied, but in one uncontrolled trial, all 19 patients receiving intra-articular depot corticosteroid injections improved within 48 h.[22] This approach is less favoured when multiple joints are involved, or a site is involved that is not easily amenable to aspiration. Systemic and intra-articular steroids should be avoided if septic arthritis is suspected.
Physicians should also include patient education regarding lifestyle in the plan for prevention of subsequent flares. Patients should be encouraged to lose weight and counselled to avoid excessive consumption of animal purines, high-fructose sweeteners and alcohol. In a minority of patients, these interventions may be enough to lower SUA levels and to prevent further attacks of gout.
Pharmacotherapy for Chronic Hyperuricemia in Gout
Treatment with agents to lower SUA is recommended for patients with recurrent attacks, polyarticular attacks, tophaceous gout, radiographical joint damage and/or severe hyperuricemia (Table 2). It is not recommended to treat asymptomatic hyperuricemia without one of these features of gout as the risks and benefits of such an intervention have not been clarified. Although the possible need for lowering of uric acid levels should be mentioned at the time of diagnosis, urate-lowering therapy should, in most cases, be initiated after resolution of an acute gouty attack. However, there is no specific evidence to support the widely accepted belief that acute flares of gout may worsen with immediate initiation of treatment, and this can be considered along with anti-inflammatory therapy in some patients.
The current guidelines suggest treating to a SUA goal below the saturation point for MSU of approximately 6.8 mg/dl, aiming for a level < 6 mg/dl (< 360 µmol/l). Patients maintained at this concentration will note a reduction in clinical flares over time, as urate stores are depleted. Achieving this goal requires frequent adjustment of doses of allopurinol or other urate-lowering agents, with close attention to SUA levels. Allopurinol should be started at a low dose (100 mg) and increased every 2–4 weeks, to as much as 800 mg/day, as required to reach the above goal. Prophylaxis with colchicine 0.5 mg once or twice daily or with low-dose NSAIDs is appropriate during the first 6 months of urate-lowering therapy or until resolution of tophi. Acute flares can occur during urate lowering and may interfere with patient adherence. Patients with tophi should be aware that resolution is slow and, in some cases, may take several years after SUA levels have met the treatment goal. Tophi that are complicated by infection or deformity and are difficult to treat with medication may require surgical excision or debridement.
Allopurinol is the first-line therapy for most patients. The side effects of allopurinol are largely limited to rash and fever; however, the allopurinol hypersensitivity syndrome (AHS) can be life threatening, occurring in an estimated 0.1% of those exposed.[23,24] Chronic kidney disease has been proposed as a risk factor for the development of AHS, and there have been guidelines for dosing of allopurinol in renal insufficiency. These were suggested by Hande et al.[25] in 1984, who reported that most patients who developed AHS at their institution had pre-existing renal impairment and were on full treatment doses of allopurinol (> 300 mg daily). Unfortunately, these recommendations for dose adjustment of allopurinol may limit the number of patients who attain an optimal SUA level. A study by Dalbeth et al.[23] found that target SUA concentrations were reached in only 28% of their patients on such recommended doses of allopurinol, whereas 60% reached that goal safely on higher doses. More recent data also challenge the current dosing recommendations.[26] Two recent large case-control studies found no difference in allopurinol dosing between patients who had developed AHS and those who were tolerant of allopurinol.[27,28] In addition, there are no prospective data to suggest that dose adjustment results in a decreased risk of AHS. Given these uncertainties, gradual escalation to more aggressive dosing regimens may be appropriate for many patients to avoid under-treatment.
Probenecid and other uricosuric agents may also be used in patients who are under-excreters of uric acid with otherwise normal renal function and who are likely to comply with the increased oral fluid intake needed to decrease the risk of stone formation. Benzbromarone, a probably more potent uricosuric available in only a few countries, seems effective even in patients with mild renal disease.[29] Both medications act on the recently defined URAT1 transporter to decrease urate reabsorption from the renal proximal tubule.[30] Medications that may increase SUA, such as thiazide diuretics, should be discontinued if possible.
Under-treating hyperuricemia may have significant consequences to patients including increased number and frequency of gout flares, resultant decreased QOL and productivity, and increased use of NSAIDs and systemic steroids. Other possible risks of under-treated hyperuricemia include worsening of endothelial dysfunction, hypertension, renal disease, systemic inflammation and increased cardiovascular risk. These considerations will be discussed in greater detail. Further study is needed to determine if intervention with allopurinol or other methods to lower SUA ameliorates these risks.
New Therapies
Acute Gout
The recent interest in the role of the NALP (NACHT, LRR and pyrin domain-containing protein) inflammasome, which generates interleukin (IL)-1β, has suggested that this cytokine may be a target for therapy of the inflammation associated with gout. An uncontrolled trial of IL-1 inhibition with anakinra was effective in the treatment of acute gout in 10 patients, and this approach may also have a role in treatment-resistant inflammation associated with tophaceous gout.[31,32] Another IL-1 inhibitory agent, rilonacept, has also been shown to be effective at suppressing flares of gouty arthritis and C-reactive protein (CRP) levels.[33] There are insufficient data to recommend the routine use of these expensive IL-1 inhibitory systemic therapies, although they may be considered in some refractory cases.
Chronic Hyperuricemia
Febuxostat, an oral non-purine selective inhibitor of xanthine oxidase, has recently been approved in the United States for the treatment of hyperuricemia in gout. The pharmacodynamics of febuxostat are not altered by moderate renal impairment, and it has appeared safe in initial studies in those with mild to moderate renal impairment.[34]
Febuxostat, at doses of 80–240 mg, was superior to standard-dose allopurinol in reaching a SUA goal of < 6 mg/dl, with similar rates of adverse events, in patients with serum Cr < 2 mg/dl. The medication is approved at doses of 40–80 mg; however, a greater percentage of patients were able to meet their goal SUA with doses as high as 240 mg without increased rates of adverse events.[34] The safety of febuxostat has not yet been assessed in those with severe renal impairment. With this caution, the medication is likely to be a useful adjunct to current therapies, especially in those who are unable to tolerate allopurinol.
Other agents that may lower the SUA level are under investigation. Pegylated recombinant mammalian uricase (PEG-uricase) has been shown in phase 2 trials to be effective at lowering SUA and preventing subsequent flares of gout.[35–37] This advance may play a role in the management of some patients with difficult-to-manage tophaceous gout.
Losartan, fenofibrate, statins, vitamin C and increased coffee intake have all been shown to modestly increase urine uric acid excretion.[38–41] There are currently no data on the clinical utility of these interventions alone in the treatment of hyperuricemia and gout. However, consideration of hyperuricemia when choosing between blood pressure and cholesterol treatments may be appropriate and helpful in many patients. Similarly, increased coffee (> 4 cups daily) and vitamin C consumption might be recommended for patients with modest hyperuricemia who are resistant to medication; however, the benefits of these interventions are likely to be small compared with pharmacological interventions.
Other Considerations
Cardiovascular Risk
Acute flares of gouty arthritis are associated with increases in inflammatory markers such as CRP.[42] Elevated CRP is an independent risk factor for cardiovascular disease.[43] A history of gouty arthritis appears also to be an independent risk factor for acute myocardial infarction, perhaps through this increase in systemic inflammation.[44] Large population based studies have shown that a diagnosis of gout is associated with increased cardiovascular and overall mortality independent of other risk factors.[45–47]
As previously mentioned, hyperuricemia is associated with a number of cardiovascular risk factors including obesity, hypertension and dyslipidemia. Studies suggest that uric acid has harmful cardiovascular effects independent of these associations.[48] SUA levels are associated with carotid atherosclerosis independent of hypertension and other risk factors.[49] Gout and hyperuricemia are independent risk factors for the development of acute myocardial infarction, stroke and peripheral arterial disease.[44,50,51] Emerging data also suggest that hyperuricemia is an independent predictor of cardiovascular morbidity and mortality.[48,51]
The effect of hyperuricemia on cardiovascular outcomes is likely to be modest when compared with other risk factors. There have been no studies to suggest a benefit of uric acid-lowering therapy on cardiovascular outcomes in either asymptomatic hyperuricemia or hyperuricemic patients with gout.
Endothelial Dysfunction and Hypertension
Serum uric acid levels have been associated with endothelial dysfunction.[52] Induction of elevated uric acid levels in rats with a uricase inhibitor has been shown to decrease nitric oxide production by endothelial cells and increase blood pressure; a finding that was reversible with allopurinol.[53] Soluble uric acid activates the renin-angiotensin system and has been shown to have proinflammatory and proliferative effects on vascular smooth muscle cells.[54,55]
The association between hyperuricemia and hypertension is well known. Large epidemiological studies, including a subset of the Framingham Heart Study, have also revealed that hyperuricemia predicts incident hypertension.[54] Several small studies have demonstrated some improvement in blood pressure in patients treated with allopurinol.[56,57] It is not yet clear whether hypertensive patients with gout will experience a reduction in blood pressure upon initiation of uric acid-lowering therapy.
Progression of Kidney Disease
Hyperuricemia has also been shown to predict the development of chronic kidney disease in a number of studies.[58] It is unclear from these studies if an elevated SUA has a causal role in the incidence and progression of renal disease or if it is simply a sensitive marker of nephron loss. In patients with stage 3–4 chronic kidney disease, hyperuricemia is also an independent risk factor for all-cause mortality.[59]
One uncontrolled study has suggested that withdrawal of chronic allopurinol therapy may result in worsening hypertension and accelerated loss of renal function.[60] A controlled clinical trial of allopurinol in 54 patients with hyperuricemia and mild-moderate chronic kidney disease resulted in decreased progression of disease at 12 months of therapy.[57] This evidence would support aggressive, goal-oriented treatment of hyperuricemia in patients with renal disease and gout.
Conclusions
Gout is a common, burdensome and often challenging disease. Clinical diagnosis, although easy in classical attacks, can be challenging in some patients. Anti-inflammatory agents are critical for treatment of acute flares and for prophylaxis when initiating urate-lowering therapy. The importance of appropriate and aggressive urate-lowering pharmacotherapy is often under-recognised. This should be undertaken in a goal-oriented approach to reach a SUA level of < 6.0 mg/dl. Patients with gout and hyperuricemia should be considered at increased risk for hypertension, cardiovascular disease and kidney disease.
From Rheumatology
Crystal Ball Gazing: New Therapeutic Targets for Hyperuricaemia and Gout
N. Dalbeth; T. Merriman
Posted: 09/10/2009; Rheumatology. 2009;48(3):222-226. © 2009 Oxford University Press
Abstract and Introduction
Abstract
Recent studies in diverse disciplines have led to significant advances in the understanding of the basic biology of hyperuricaemia and gout, with important implications for future treatment. These findings include genetic variation within SLC2A9 as a key regulator of urate homeostasis, and identification of the urate-anion exchanger urate transporter 1 (URAT1) and other renal uric acid transporters. Recognition of urate as an endogenous danger signal and activator of the adaptive immune response suggests an important role for urate crystals in non-microbial immune surveillance. The central role of NALP3 inflammasome activation and IL-1β signalling in the initiation of the acute gout attack raises the possibility of new therapeutic targets. Disordered osteoclastogenesis in patients with chronic gout highlights potential therapies for prevention of joint damage. This review summarizes these findings and the potential relevance for future management of gout.
Introduction
Gout is a common inflammatory disease of metabolic origin. This disorder is characterized by intermittent attacks of severe joint inflammation and, in the presence of persistent hyperuricaemia, development of tophaceous disease and chronic gouty arthropathy. The central role of hyperuricaemia and MSU crystal deposition in the pathogenesis of gouty inflammation has been recognized for decades (reviewed in[1]), and new urate-lowering drugs such as febuxostat and PEG-uricase should lead to major improvements in long-term management of gout.[2,3] However, many questions remain about the basic mechanisms of this disease. These questions include: why certain individuals are predisposed to hyperuricaemia; why certain hyperuricaemic individuals are predisposed to gout; how uric acid is handled by the renal tubule; what molecular pathways are involved in initiation of the acute gout attack; and what factors mediate joint damage in chronic gout. Recent laboratory research has shed light on a number of these issues. This review summarizes this research and focuses on the implications for future treatment of hyperuricaemia and gout.
Genetic Polymorphisms of SLC2A9 as a Key Regulator of Urate Homeostasis
The solute carrier SLC2A9 was first identified as a member of the SLC2A gene family of hexose transporters,[4] with its major physiological role assumed to be the transport of glucose and fructose. More recently, genome-wide association scanning and follow-up studies have demonstrated a role in Caucasian populations for genetic variation within SLC2A9 in the control of serum urate levels[5-9] and susceptibility to gout.[5,6,8,10] These novel findings have revealed SLC2A9 to be a transporter of uric acid that can be inhibited by a uricosuric agent.[8] Inheritance of one predisposing variant of SLC2A9 increases an individual's risk of developing gout by 30-70% [odds ratio (OR) = 1.3-1.7] ([5,6,8,10]; the inter-study variability is likely to be due to differing criteria for ascertainment of gout).
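The arithmetic relating these two figures is simply that an odds ratio maps to a percent increase in odds as (OR - 1) x 100. A one-line Python check (the function name is ours):

    def pct_increase(odds_ratio):
        """Percent increase in odds implied by an odds ratio."""
        return (odds_ratio - 1.0) * 100.0

    # The per-variant range reported for SLC2A9: OR = 1.3-1.7
    print(round(pct_increase(1.3)), round(pct_increase(1.7)))  # 30 70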
While the precise molecular mechanism by which SLC2A9 increases the risk of hyperuricaemia and gout is not yet understood, it is likely that the predisposing variant increases expression of the shorter isoform 2 of SLC2A9,[6] encoding a protein with a shorter N-terminal region (GLUT9ΔN, resulting from transcriptional editing of exons 1a and 2, using an alternative translational initiation codon in exon 1b).[4] Expression of this variant is detectable only in kidney and placenta in humans. SLC2A9v2 localizes exclusively to the apical membrane of the renal proximal tubule[4] (Fig. 1). Thus, given that it has been shown to transport uric acid,[8] SLC2A9v2 appears to function, as does the urate-anion exchanger urate transporter 1 (URAT1),[11] in the reabsorption of uric acid.[12] Reabsorbed uric acid exits the kidney tubular cell into the serum via the full-length variant (SLC2A9v1) situated in the basolateral membrane.[13] The demonstration that genetic variation within SLC2A9 influences serum urate underscores SLC2A9 as a checkpoint in the control of serum urate levels. SLC2A9 can be regarded as a potential therapeutic target and warrants concerted pharmaceutical research aimed at developing superior uricosuric agents.
Figure 1.
The uric acid transportasome. Current understanding of uric acid transport in the proximal renal tubule. Monocarboxylates accumulate in the tubular cell through sodium-dependent monocarboxylate transporters SLC5A8 and SLC5A12, and dicarboxylates through SLC13A3. Uric acid enters the cell in exchange for monocarboxylate via apical URAT1 and for dicarboxylate via apical OAT4. Apical SLC2A9v2 plays a significant role in uric acid reabsorption, the reabsorbed uric acid exiting the cell through basolateral SLC2A9v1. For efflux of uric acid into the lumen, MRP4 and a voltage-driven organic anion transporter (vOAT1/NPT1) are candidates. OAT1 and OAT3 are known to transport uric acid, although the direction of transport is not clear.
Ingestion of fructose-sweetened, but not artificially sweetened, soft drinks is associated with increased risk of hyperuricaemia and gout (OR = 1.8 for hyperuricaemia at ≥4 servings/day and OR = 1.9 for gout at ≥2 servings/day).[14,15] Fructose is the only sugar known to increase serum urate levels.[16] The observation that SLC2A9, genetically associated with hyperuricaemia and gout, transports both fructose and uric acid (with maximal transport of fructose occurring in the absence of uric acid) suggests a possible gene/environment interaction in the development of hyperuricaemia and gout. Inclusion of data on fructose ingestion as a covariate in genetic studies of SLC2A9 in hyperuricaemia and gout may be illuminating.
Other Candidate Genes
The genome-wide association scans clearly demonstrated SLC2A9 to have the major single effect on serum urate levels in Caucasian populations.[6-9] No other loci reached the genome-wide level of significance. However, SLC2A9 explains < 5% of variance in serum urate levels, indicating that a number of other factors controlling serum urate levels (environmental, epigenetic and genetic) remain to be discovered. It is reasonable to expect that meta-analysis of the genome-wide scan data will reduce the background 'noise' in the association data and enable identification of other genes that control serum urate levels, which can then be regarded as validated therapeutic targets. Common genetic variation in URAT1, the β3 adrenergic receptor and methylene tetrahydrofolate reductase genes has also been implicated in regulation of serum urate levels in more than one population[17-25] (Table 1). The β3 adrenergic receptor data are particularly intriguing, suggesting a genetic link between serum urate levels and insulin resistance, a frequent comorbid feature of gout.[26,27] Study of these genes in other populations, including Caucasian, is warranted.
Most studies to date have identified genetic associations with serum urate and hyperuricaemia. At present it is not known what factors determine the development of gout in individuals with hyperuricaemia, noting that the majority of those with hyperuricaemia do not develop gout.[28] Answering this question should open novel therapeutic opportunities in this disease. Genome-wide association scanning in gout may provide critical insights into this issue. This will require a significant international effort to recruit thousands of cases. Accurate phenotyping will be essential to reduce clinical heterogeneity, ideally with gout proven by the gold-standard method of microscopic MSU crystal diagnosis.
The Renal Uric Acid Transportasome
We envisage that pharmacogenomics will be an important part of decision-making in future clinical care, perhaps in optimizing treatment using existing therapies. One prominent example is the decision to use the xanthine oxidase inhibitor allopurinol vs uricosuric agents. At present, the latter tend to be used when allopurinol is not tolerated or has proven ineffective, often the case in clinical practice.[2,29] The major regulator of serum urate is renal excretion, with insufficiency in this process a feature of gout.[30-32] The use of uricosuric agents as initial therapy in gout might be justified in patients who could be demonstrated, using a simple test, to be insufficient in renal uric acid excretion. Genetic testing at individual uric acid transport genes (such as SLC2A9) would lack the specificity required for clinical decision-making. However, a uric acid 'transportasome' genetic test (in combination with standard urine testing for uric acid excretion) may give the necessary specificity and sensitivity. The renal uric acid excretion capability of an individual is the net effect of the uric acid secretion and reabsorption activities of the various renal uric acid transporters.[12] This renal exchange is mediated by specialized molecules expressed in renal proximal tubule cells (Fig. 1; reviewed in[33]). Identified molecules include SLC2A9 (see above), URAT1, organic anion transporters 1, 3 and 4 (OAT1, OAT3, OAT4), multi-drug resistance protein 4 (MRP4) and the sodium-coupled monocarboxylate transporters SMCT1 and SMCT2 (SLC5A8, SLC5A12). In addition to SLC2A9, genetic variation in URAT1 has been demonstrated to influence serum urate levels in Caucasian and Japanese cohorts[17,18] (Table 1). The influence of genetic variation in other uric acid transportasome molecules has not been adequately tested. The fact that association at other transportasome genes was not reported in the genome-wide association scans for genetic variants controlling serum urate levels does not rule out a role for variation in such genes in control of serum urate. Furthermore, current genome-wide genotyping SNP microarrays do not have adequate coverage of some genes, including URAT1.[6] It is not unreasonable to hypothesize that the combined influence of genetic variation in the uric acid transportasome (excluding SLC2A9) could exceed the influence of SLC2A9 on serum urate levels. What is required is an exhaustive set of genetic association experiments on the influence of transportasome gene variation on serum urate levels and gout, with the ultimate goal of developing a transportasome genetic risk algorithm to inform decision-making on the optimal therapy for a newly diagnosed patient; a toy sketch of what such a score might look like follows.
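No such algorithm exists yet; purely to illustrate the idea, the minimal Python model below combines hypothetical per-gene odds ratios log-additively across risk-allele counts. Every number other than the SLC2A9 range discussed above is an invented placeholder, not a published estimate.

    import math

    # Hypothetical per-risk-allele odds ratios for transportasome genes.
    # Only SLC2A9's range (OR ~ 1.3-1.7) is reported in the text; all other
    # values are invented placeholders for illustration.
    VARIANT_OR = {"SLC2A9": 1.5, "URAT1": 1.2, "OAT4": 1.1, "MRP4": 1.1}

    def combined_or(risk_allele_counts):
        """Combined odds ratio under a log-additive (multiplicative) model."""
        log_or = sum(n * math.log(VARIANT_OR[gene])
                     for gene, n in risk_allele_counts.items())
        return math.exp(log_or)

    # A hypothetical patient homozygous for the SLC2A9 risk allele and
    # heterozygous at URAT1: 1.5**2 * 1.2 = 2.7
    print(round(combined_or({"SLC2A9": 2, "URAT1": 1}), 2))

In practice, as the text notes, such a score would need to be calibrated against measured renal uric acid excretion before it could inform therapy choice.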
MSU Crystals as an Endogenous Danger Signal
The immune system identifies infections through cellular activation by microbial adjuvants (reviewed in[34]); immune responses to non-microbial stimuli (such as tumours or transplanted cells) also require adjuvants to generate an immune response, and dying cells are thought to be important activators of this response through release of endogenous adjuvants (or 'danger signals').[35] Until recently, the identity of these danger signals has not been known. In 2003, Shi et al. [36] reported that most of the endogenous activity involved in priming CD8 T-cell responses to dying cells is due to MSU crystals. This study demonstrated that dying cells have super-saturating concentrations of urate (likely due to liberation of purines through degradation of DNA and RNA), and that in vivo elimination of urate using allopurinol and uricase inhibits the adjuvant activity, and reduces priming of cytotoxic T lymphocytes (CTLs) by about 90%. This effect was found to occur through stimulation of dendritic cells to increase expression of costimulatory molecules, such as CD86 and CD80. Thus, formation of MSU crystals plays a key role in immune surveillance and generation of adaptive immunity to non-microbial stimuli.
Further work has shown that in mouse tumour models, MSU crystals induce an IL-5 Th2 immune response, suggesting that MSU crystals can also enhance humoral immunity.[37] These observations may be of relevance to the generation of therapeutic immune responses. Adjuvants are frequently used in human vaccines to activate the immune system to foreign antigen. Alum is the most frequently used adjuvant in human vaccines, and predominantly induces humoral immunity, possibly through activation of Th2 cells. A recent study has shown that high concentrations of urate are present after injection of alum into the peritoneal cavities of mice.[38] Intra-peritoneal injection of alum was associated with an intense inflammatory response that was entirely abrogated by treatment with uricase, and was not observed in MyD88-deficient mice (see below). These findings suggest that alum acts as an adjuvant by inducing formation of MSU crystals, which in turn promotes differentiation and activation of inflammatory dendritic cells.
The potential clinical relevance of these findings has been explored further in animal models of non-microbial immunity. In a mouse tumour model, elevated urate levels are found in tumours undergoing immune rejection.[39] Subcutaneous injection of MSU crystals close to the site of the tumour enhances the tumour immune response, and both allopurinol and uricase delay tumour immune rejection.[39] In a tumour protection model, addition of MSU crystals to dying tumour cells suppresses subsequent tumour growth in a dose-dependent manner.[37] In a mouse model of transplant immunity, elimination of urate using a combination of allopurinol and uricase reduces generation of CTLs to an antigen in transplanted syngeneic cells by ~80%.[40] Urate depletion also inhibits proliferation of auto-reactive T cells in a transgenic mouse diabetes model. This effect is related to reduced activation of endogenous antigen-presenting cells.[40]
Together, these data provide strong and consistent evidence for the role of MSU crystals in animal models of non-microbial immune surveillance. These data suggest a potential biological advantage to hyperuricaemia and formation of MSU crystals in immune surveillance and generation of adaptive immunity. However, it should be noted that serum urate levels are low in mice and most other non-primate mammals due to persistence of uricase. Studies of human immune responses are needed to clarify whether similar mechanisms of immune activation are relevant in human disease.
If these effects are confirmed in human subjects, there are several implications for treatment of human disease. First, these observations suggest that gout therapies that suppress the serum urate to very low levels for prolonged periods may be associated with increased risk of immunosuppression, and reduced immune activation in response to non-microbial danger signals and vaccines. At present, there is no documented evidence to support this hypothesis, but long-term safety monitoring of newly developed potent urate-lowering therapies will be needed to clarify the optimal serum urate levels in treatment of gout. Second, these data suggest potential new therapeutic indications for potent urate-lowering therapies, particularly in the fields of transplantation and autoimmunity.
The Central Role of IL-1β in the Pathogenesis of Acute Gouty Inflammation
It has been known for many years that MSU crystals stimulate monocytes and macrophages to produce IL-1β.[41] However, the central importance of IL-1β in the initiation and amplification phases of the acute gout attack has only recently been demonstrated. The NALP3 inflammasome (cryopyrin) is a complex of intracellular proteins that is activated on exposure to microbial elements, such as bacterial RNA and toxins,[42,43] and is required for adequate responses to adjuvants such as alum.[44] Activation of this protein complex leads to release of caspase-1, which is required for cleavage of pro-IL-1β to the active form of IL-1β. A recent report has demonstrated that the NALP3 inflammasome is essential for acute gouty inflammation.[45] MSU crystals activate caspase-1 and lead to release of IL-1β in human monocytes. These effects are reduced in macrophages from mice lacking components of the NALP3 inflammasome. Neutrophil influx following injection of crystals into the peritoneal cavity is impaired in mice lacking components of the NALP3 inflammasome, and also in mice deficient in the IL-1 receptor. Interestingly, colchicine, a drug that is frequently used to prevent and treat acute gout attacks, blocks IL-1β maturation by MSU crystals in vitro, suggesting that the therapeutic effect of this agent may be, at least in part, due to inhibition of NALP3 inflammasome activation.
The essential role of IL-1β has been further emphasized by work showing that MyD88, an intracellular adaptor protein involved in IL-1 receptor (IL-1R) signalling, is required for the inflammatory response to MSU crystals.[46,47] Mice deficient in MyD88 or the MyD88-dependent IL-1 receptor show reduced neutrophil influx in response to MSU crystal injection, and blockade of IL-1 by neutralizing antibodies also attenuates the inflammatory response in the urate peritonitis model.[47,48] The relevance of these observations has been confirmed by a proof-of-concept open study of 10 patients with acute gout, which demonstrated rapid response to IL-1 inhibition by anakinra.[48]
These studies provide new insights into the pathways that lead to acute gouty inflammation, showing that MSU crystals activate highly conserved pathways of innate immunity to induce the acute inflammatory response in gout. Together, these reports point to a new paradigm of disease pathogenesis, with IL-1β central to the initiation and amplification of the acute gout attack (Fig. 2). Agents targeting elements of the NALP3 inflammasome, IL-1β or the IL-1R-MyD88 complex may provide more directed therapies for prevention and treatment of acute gout without the severe side effects that are frequently associated with currently used treatments, such as high-dose colchicine and NSAIDs.
Figure 2.
A new paradigm of acute gouty inflammation. Modification of the model previously proposed,[1] incorporating the central role of the NALP3 inflammasome, IL-1R-MyD88 and IL-1β.
Conclusion
Despite the well-recognized role of hyperuricaemia and MSU crystal formation in the pathogenesis of gout, many patients with this disease continue to experience recurrent flares, chronic pain and disability.[53-55] The major developments in the understanding of the basic science of all stages of this disease, from hyperuricaemia to recurrent acute gout attacks to chronic erosive disease, should lead to development of novel therapeutic approaches to diagnosis, treatment and monitoring of this disease.
From Arthritis & Rheumatism Research News Alerts
Drinking 4 or More Cups of Coffee a Day May Help Prevent Gout
Posted: 08/23/2007; Arthritis & Rheumatism. 2007;56(6):2049-2055. © 2007 Wiley InterScience
Coffee is a habit for more than 50 percent of Americans, who drink, on average, 2 cups per day. This widely consumed beverage is regularly investigated and debated for its impact on health conditions from breast cancer to heart disease. Among its complex effects on the body, coffee or its components have been linked to lower insulin and uric acid levels on a short-term basis or cross-sectionally. These and other mechanisms suggest that coffee consumption may affect the risk of gout, the most prevalent inflammatory arthritis in adult males.
To examine how coffee consumption might aggravate or protect against this common and excruciatingly painful condition, researchers at the Arthritis Research Centre of Canada, University of British Columbia in Canada, Brigham and Women's Hospital, Harvard Medical School, and Harvard School of Public Health in Boston conducted a prospective study of 45,869 men over age 40 with no history of gout at baseline. Over 12 years of follow-up, Hyon K. Choi, MD, DrPH, and his associates evaluated the relationship between coffee intake and the incidence of gout in this high-risk population. Their findings, featured in the June 2007 issue of Arthritis & Rheumatism (http://www.interscience.wiley.com/journal/arthritis), provide compelling evidence that drinking 4 or more cups of coffee a day dramatically reduces the risk of gout for men.
Subjects were drawn from an ongoing study of some 50,000 male health professionals, 91 percent white, who were between 40 and 75 years of age in 1986 when the project was initiated. To assess coffee and total caffeine intake, Dr. Choi and his team used a food-frequency questionnaire, updated every 4 years. Participants chose from 9 frequency responses – ranging from never to 2 to 4 cups per week to 6 or more per day – to record their average consumption of coffee, decaffeinated coffee, tea, and other caffeine-containing comestibles, such as cola and chocolate.
Through another questionnaire, the researchers documented 757 newly diagnosed cases meeting the American College of Rheumatology criteria for gout during the follow-up period. Then, they determined the relative risk of incident gout for long-term coffee drinkers divided into 4 groups – less than 1 cup per day, 1 to 3 cups per day, 4 to 5 cups per day, and 6 or more cups per day – as well as for regular drinkers of decaffeinated coffee, tea, and other caffeinated beverages. They also evaluated the impact of other risk factors for gout – body mass index, history of hypertension, alcohol use, and a diet high in red meat and high-fat dairy foods among them – on the association between coffee consumption and gout among the study participants.
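As a rough sense of scale (a back-of-envelope figure that ignores dropout and late entry), 757 incident cases among 45,869 men followed for 12 years corresponds to roughly 1.4 cases per 1000 person-years:

    # Crude gout incidence implied by the study figures (ignores censoring).
    cases = 757
    person_years = 45869 * 12
    print(round(cases / person_years * 1000, 2))  # ~1.38 per 1000 person-years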
Most significantly, the data revealed that the risk for developing gout decreased with increasing coffee consumption. The risk of gout was 40 percent lower for men who drank 4 to 5 cups a day and 59 percent lower for men who drank 6 or more cups a day than for men who never drank coffee. There was also a modest inverse association with decaffeinated coffee consumption. These findings were independent of all other risk factors for gout. Tea drinking and total caffeine intake were both shown to have no effect on the incidence of gout among the subjects. On the mechanism of these findings, Dr. Choi speculates that components of coffee other than caffeine may be responsible for the beverage's gout-prevention benefits. Among the possibilities, coffee contains the phenol chlorogenic acid, a strong antioxidant.
While this study does not prescribe 4 or more cups a day, it can help individuals make an informed choice regarding coffee consumption. "Our findings are most directly generalizable to men age 40 years and older, the most gout-prevalent population, with no history of gout," Dr. Choi notes. "Given the potential influence of female hormones on the risk of gout in women and an increased role of dietary impact on uric acid levels among patients with existing gout, prospective studies of these populations would be valuable."
From Current Medical Research and Opinion
Uric Acid: Role in Cardiovascular Disease and Effects of Losartan
Michael Alderman; Kala J. V. Aiyer
Posted: 04/13/2004; Curr Med Res Opin. 2004;20(3) © 2004 Librapharm Limited
Summary and Introduction
Summary
A substantial body of epidemiological and experimental evidence suggests that serum uric acid is an important, independent risk factor for cardiovascular and renal disease, especially in patients with hypertension, heart failure, or diabetes. Elevated serum uric acid is highly predictive of mortality in patients with heart failure or coronary artery disease and of cardiovascular events in patients with diabetes. Further, patients with hypertension and hyperuricemia have a 3- to 5-fold increased risk of experiencing coronary artery disease or cerebrovascular disease compared with patients with normal uric acid levels. Although the mechanisms by which uric acid may play a pathogenetic role in cardiovascular disease are unclear, hyperuricemia is associated with deleterious effects on endothelial function, oxidative metabolism, platelet adhesiveness, hemorheology, and aggregation. Xanthine oxidase inhibitors (e.g., allopurinol) or a variety of uricosuric agents (e.g., probenecid, sulfinpyrazone, benzbromarone, and benziodarone) can lower elevated uric acid levels, but it is unknown whether these agents favorably affect cardiovascular outcomes. However, the findings of the recent LIFE study in patients with hypertension and left ventricular hypertrophy suggest the possibility that a treatment-induced decrease in serum uric acid may indeed attenuate cardiovascular risk. LIFE showed that approximately 29% (95% CI, 14% to 107%; p = 0.004) of the treatment benefit of losartan-based versus atenolol-based therapy on the primary composite endpoint (death, myocardial infarction, or stroke) may be ascribed to differences in achieved serum uric acid levels. Overall, serum uric acid may be a powerful tool to help stratify risk for cardiovascular disease. At the very least, it should be carefully considered when evaluating overall cardiovascular risk.
Introduction
Nearly 120 years have elapsed since serum uric acid was first described as a potential factor in the development of cardiovascular disease.[1] Much, but not all, epidemiological research identifies hyperuricemia as an independent risk factor for the development of cardiovascular and renal disease, particularly in patients with hypertension or congestive heart failure, and in women.[2-9]
Hyperuricemia (usually defined as serum uric acid levels > 6.5 mg/dL in men and > 6.0 mg/dL in women) is frequently encountered in hypertensive patients and is often due to a defect in renal urate clearance.[5,10] Commonly encountered in hypertensive patients without overt evidence of renal insufficiency, it may reflect subclinical renal disease. Gout and uric acid kidney stones are traditionally considered to be the major complications of hyperuricemia, but mounting evidence suggests that hyperuricemia is also an independent risk factor for cardiovascular (CV) and renal disease.[2,7,11,12] Elevated serum uric acid levels can be lowered by inhibiting xanthine oxidase (e.g., allopurinol), a key enzyme involved in the terminal stages of uric acid synthesis, or by increasing the rate of excretion of uric acid in the renal tubules (e.g., probenecid, sulfinpyrazone, benzbromarone, benziodarone).[13-16]
Whether a reduction in serum uric acid levels in hypertensive patients impacts the overall risk of CV disease is uncertain. However, recent findings from the LIFE study are consistent with a role for uric acid in the development of CV disease. In this study, up to one third of the CV benefit of losartan-based versus atenolol-based therapy could be ascribed to differences in the effect of these agents on serum uric acid levels.[17]
This paper will review the evidence that uric acid is an independent risk factor for CV disease, summarize the potential pathophysiological mechanisms that link uric acid with hypertension and cardiorenal disease, and explore the potential clinical implications of lowering serum uric acid levels in the management of hypertensive patients with hyperuricemia.
Articles included in this review were identified using a MEDLINE search for studies published between 1990 and 2003 and included the search terms uric acid, cardiovascular disease, renal disease, hypertension, and angiotensin II antagonists. Articles describing major clinical trials, new data, or new mechanisms were selected for review.
Uric Acid Physiology and Purine Metabolism in Humans
Uric acid is the final breakdown product of dietary or endogenous purines and is generated by xanthine dehydrogenase (xanthine oxidase), primarily in the liver and intestine (Figure 1).[14] Exogenous purines also represent an important source of uric acid, and approximately 50% of RNA purines and 25% of DNA purines are absorbed in the intestine and subsequently excreted in urine. In adult humans, the uric acid pool is approximately 1.2 g and undergoes rapid turnover, with two thirds of the uric acid pool excreted in urine.[18,19] The kidneys handle urate by multiple processes, including glomerular filtration and reabsorption, secretion, and postsecretory absorption in the proximal convoluted tubule.[14, 20]
Figure 1.
Overview of biosynthesis of uric acid
Urate handling by the kidneys can be influenced by multiple factors such as extracellular volume status, urine flow rate, urine pH, urate load, and hormones. In addition, several pharmacologic agents modulate urate excretion, including probenecid, salicylates, sulfinpyrazone, and certain medications used for hypertension management, such as angiotensin II antagonists (AIIAs) like losartan.[18] Serum uric acid levels are influenced acutely by exercise and diet, but persistent hyperuricemia typically occurs due to defective renal urate clearance.[7] The limited solubility of uric acid and the absence of the enzyme uricase (due to a mutation in early hominoid evolution) give rise to a number of clinical conditions including gout and uric acid kidney stones.
Link Between Serum Uric Acid and Cardiovascular and Renal Disease
Epidemiological Evidence
Uric acid has often been considered a part of the dysmetabolic syndrome or simply a marker of other coronary disease risk factors such as hypertension, dyslipidemia, obesity, glucose intolerance, and renal disease.[7,21-24] However, multiple studies provide strong evidence that elevated uric acid may also be an independent risk factor for total and/or CV mortality.[2-4,6,8,9,25-27]
Most epidemiological studies have shown a significant association between serum uric acid and CV morbidity and mortality.[2-4,6,8,9,12,25,26] Serum uric acid levels are strongly associated with the occurrence of stroke and MI as well as all CV events. The link between serum uric acid and CV disease has been studied in the general population as well as in persons with diabetes mellitus, congestive heart disease, angiographically confirmed coronary artery disease, and hypertension (Table 1).[3]
General Population
NHANES I began more than three decades ago as the first epidemiological study representative of the US population. A multivariate analysis of follow-up data through 1992 on 5926 subjects who had serum uric acid levels measured at baseline showed that raised serum uric acid in both men and women was associated with significantly higher risk of all-cause, CV disease, and ischemic heart disease mortality (Figure 2).[11] The risk of death due to ischemic heart disease increased by 77% (men) and by 300% (women) when serum uric acid levels were in the highest quartile (> 416 µmol/L or 7 mg/dL) compared with the lowest quartile (< 321 µmol/L or 5.4 mg/dL). After adjustment for age, race, body mass index, smoking status, alcohol consumption, cholesterol level, history of hypertension or diabetes, and diuretic use, each 59.48 µmol/L (1 mg/dL) increase in serum uric acid level was associated with a hazard ratio for CV disease and ischemic heart disease mortality of 1.09 (95% CI, 1.02-1.18) and 1.17 (95% CI, 1.06-1.28) for men and 1.26 (95% CI, 1.16-1.36) and 1.30 (95% CI, 1.17-1.45) for women, respectively. This analysis, which was based on 1593 deaths, suggests that serum uric acid represents an independent risk factor (not just a marker) for hypertension-associated morbidity and mortality.
Figure 2.
Hazard ratios of all-cause (All), cardiovascular disease (CVD), and ischemic heart disease (IHD) mortality for each 59.48 µmol/L (1 mg/dL) increase in serum uric acid levels in a follow-up study of NHANES I. Adapted from Fang and Alderman, JAMA 2000;283:2404-2410[11]
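Because these hazard ratios are expressed per 1 mg/dL, the implied risk over a wider urate gap compounds multiplicatively under the Cox model's log-linearity assumption. The brief Python check below applies this to the 1.6 mg/dL span between the quoted quartile cutpoints; the extrapolation across that span is our illustration, not a reported result:

    # Scaling a per-unit hazard ratio to a larger exposure difference,
    # assuming log-linearity as in a Cox proportional hazards model.
    def scale_hr(hr_per_unit, units):
        return hr_per_unit ** units

    # Women, IHD mortality: HR 1.30 per 1 mg/dL; span between the quoted
    # quartile cutpoints (7.0 vs 5.4 mg/dL) is 1.6 mg/dL.
    print(round(scale_hr(1.30, 1.6), 2))  # ~1.52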
Not all population epidemiology studies support uric acid as an independent risk factor for CV disease. A recent update of the Framingham Heart Study failed to link uric acid with CV disease.[28] Serum uric acid levels were significantly correlated with risk of CV disease in women, but the significance of this was not substantiated after correcting for 11 variables, including hypertension, body mass index, and diuretic use. When Framingham investigators evaluated the risk of CV disease in patients who were more likely to suffer from chronic hyperuricemia (i.e., patients with gout), uric acid levels were found to be a significant and independent risk factor for CV disease in men.[29] In this report, factoring in numerous other risk factors failed to impact the findings. This finding underscores the difficulty of showing uric acid to be an independent risk factor when linkage to other risk factors may be strong.
While the Framingham Heart Study was well designed, it must be borne in mind that the population studied was small and not representative of the US population. Further, the mortality rate in the Framingham Heart Study was approximately half that observed in NHANES I. Collectively, these methodological considerations may, in part, explain the differences in findings between the Framingham and NHANES I studies.
Hypertension
Hyperuricemia is frequently encountered in hypertensive patients and may occur due to a defect in renal urate clearance.[5, 10] Patients with hypertension and hyperuricemia have a 3- to 5-fold increased risk of experiencing coronary artery disease or cerebrovascular disease compared with patients with normal uric acid levels.[5]
A recent study by Franse et al.[30] reported on the longitudinal relationship between serum uric acid levels, diuretic treatment and the risk of CV events in the Systolic Hypertension in the Elderly Program (SHEP). This randomized clinical trial involved a total of 4327 men and women aged 60 years or older who had isolated systolic hypertension and who were treated for 5 years with the diuretic chlorthalidone or placebo, with the addition of atenolol or reserpine as needed. The study showed that baseline serum uric acid independently predicts CHD but not stroke events in this population, and that the benefit of diuretic intervention for coronary events was limited to those patients whose uric acid levels did not increase. The authors concluded that monitoring serum uric acid levels during diuretic therapy may allow identification of patients remaining at high CHD risk on diuretic therapy, despite blood pressure control.
The Worksite Treatment Program, providing antihypertensive care for 8690 worksite patients, showed that a 1 mg/dL increase in average in-treatment serum uric acid was associated with a 32% increase in CV events, a value comparable to the CV effects of a 46 mg/dL change in total cholesterol or a 10 mmHg change in systolic blood pressure.[31] Interestingly, the study showed that, despite satisfactory blood pressure control, the association of hyperuricemia with CVD persisted. This raises the possibility that the deleterious effects of raised serum uric acid levels and/or its oxidation byproducts may have to be offset by additional therapies to realize the full therapeutic benefit of antihypertensive therapy.
In a 12-year Italian study (the PIUMA Study) involving 1720 previously untreated hypertensive patients, Verdecchia and colleagues[12] also found that serum uric acid was a powerful predictor of CV disease and all-cause mortality. After adjustment for a wide variety of confounding variables, patients with uric acid levels in the highest quartile had an increased risk of CV events (RR = 1.73, 95% CI, 1.01-3.00), fatal CV events (RR = 1.96, 95% CI, 1.02-3.79), and all-cause mortality (RR = 1.63, 95% CI, 1.02-2.57) compared with patients whose uric acid levels were in the second quartile.
Heart Failure
In patients with heart failure, high levels of serum uric acid are highly predictive of mortality and are useful in identifying the need for heart transplantation.[32] As shown in Figure 3, a graded relationship exists between serum uric acid levels and mortality in heart failure patients. Heart failure patients with serum uric acid levels > 800 µmol/L had a relative risk of mortality 18 times higher than that of patients with uric acid levels ≤ 400 µmol/L, although such extreme values are not observed in most patients. These findings suggest that serum uric acid levels may provide valuable prognostic information that is superior to other well-established parameters such as clinical status, exercise capacity, and kidney function. As noted in the editorial accompanying this study, these findings not only reveal a potentially new diagnostic test but also provide further evidence of the possible importance of xanthine oxidase/serum uric acid in the pathophysiology of heart failure.[33]
Figure 3.
A recent study in 294 chronic heart failure patients indicates a graded relationship between serum uric acid levels and survival. The plots show Kaplan-Meier survival curves and hazard ratios for different serum uric acid levels (100 µmol/L = 1.68 mg/dL). *p = 0.016 vs. patients with UA ≤ 400 µmol/L. ****p < 0.0001 vs. patients with UA ≤ 400 µmol/L. Adapted from Anker et al., Circulation 2003;107:1991-7[32]
Coronary Heart Disease
Patients with angiographically confirmed coronary artery disease with serum uric acid levels in the upper quartile were five times more likely to die than those in the lowest quartile.[34] A 1 mg/dL increase in serum acid levels was associated with a 26% increase in mortality. This is comparable in magnitude to the 20% to 25% increase in MI associated with a 10- to 12-mm Hg increase in systolic blood pressure.
Diabetes
A study of approximately 8000 patients with Type 2 diabetes showed that stroke incidence increased significantly across quartiles of serum uric acid levels (p < 0.001) and that high serum uric acid levels (> 295 µmol/L) were significantly associated with risk of fatal and nonfatal stroke (HR 1.93, 95% CI, 1.30-2.86, p = 0.001).[35] The increased risk was apparent even when other CV risk factors were taken into account.
In conclusion, most of the available epidemiological evidence suggests a strong independent relationship between serum uric acid levels and risk of CV mortality and morbidity. The evidence is particularly powerful and consistent in patients at high CV risk, such as those with hypertension, congestive heart failure, and diabetes. While uric acid levels and CV disease are still likely to be linked in the general population, the evidence supporting such an association is less compelling, and further studies are needed to delineate this issue.
Experimental Evidence
Several lines of experimental evidence are consistent with epidemiological evidence suggesting that uric acid is associated with deleterious effects on the vasculature and renal tissues. Marked hyperuricosuria associated with chemotherapy leads to high urinary concentrations of uric acid and the development of an acute renal failure syndrome. This is characterized by intratubular crystal formation, tubular obstruction, interstitial inflammation, and acute renal failure.[36] Similarly, patients with gout have a high incidence of renovascular histological abnormalities including atherosclerosis, arteriosclerosis, glomerulosclerosis, and tubular atrophy.[37, 38]
Hyperuricemia is associated with increased platelet aggregability and activation, and this may increase the risk of coronary thrombosis in patients with underlying coronary artery disease.[39,40] Suspensions of washed platelets responded to urate crystals with a rapid active release of serotonin, ATP, and ADP, followed by a slower loss of all platelet constituents. These actions might contribute to gouty inflammation or to enhanced atherogenesis. Studies in rats show that elevated uric acid levels following administration of a uricase inhibitor increase blood pressure as well as produce a primary arteriopathy independent of blood pressure.[41,42] These effects, possibly mediated via activation of the renin-angiotensin system and down-regulation of nitric oxide synthase, could be ameliorated by treatment with allopurinol or benziodarone.[42] Exogenous uric acid gives rise to endothelial dysfunction, and endogenous uric acid concentrations correlate with the extent of endothelial dysfunction.[43] However, this association does not exclude the possibility that overactive xanthine oxidase, rather than uric acid itself, may be the underlying cause of vascular injury. Increased uric acid levels may simply reflect an increase in the levels of xanthine oxidase expressed in endothelial cells.
Proposed Mechanisms Linking Uric Acid and Cardiovascular and Renal Disease
Hyperuricemia is frequently encountered in hypertensive patients, with as many as 1 in 4 untreated hypertensives exhibiting an elevated serum uric acid level.[5,10] Hyperuricemia is also present in 40% to 50% of patients receiving diuretics, and in approximately 75% of patients with malignant hypertension or renal insufficiency.[5,13,44]
Serum uric acid may represent a precursor of hypertension[45] or be a reflection of subclinical renal dysfunction, which may cause both the increased serum uric acid level and increased blood pressure. In a nested case-control study, which involved 1031 patients with hypertension and 1031 normotensive controls from the Kaiser Permanente Multiphasic Health Checkup cohort in northern California, serum uric acid levels were significantly associated with the occurrence of hypertension (p = 0.0003) and may represent a marker or intermediate step in the pathophysiologic pathway leading to hypertension.[45]
The cause(s) of hyperuricemia in hypertension is unclear, but several mechanisms have been proposed. First, hypertension may increase serum uric acid via elevated serum lactate levels.[13] Hypertension initially produces renal microvascular disease and local tissue hypoxia, as evidenced by increased levels of serum lactate. Lactate would be expected to decrease the tubular secretion of uric acid, leading to increased serum levels. Intrarenal ischemia can also lead to increased generation of uric acid via xanthine oxidase. It is also possible that metabolic perturbations (e.g., hyperinsulinemia) or sympathetic activity may produce alterations in renal sodium handling, leading to increased arterial pressure, decreased renal blood flow and decreased uric acid secretion (Figure 4). This, in turn, increases purine oxidation, which results in increased reactive oxygen species, subsequent vascular injury, and reduced nitric oxide.
Figure 4. Postulated link between alterations in uric acid metabolism and pathophysiology of hypertension. Adapted from Ward, Lancet 1998;352:670-1[29]
Several possible pathologic mechanisms linking serum uric acid to CV disease have been proposed, including deleterious effects on endothelial function, oxidative metabolism, platelet adhesiveness, hemorheology, and aggregation.[46-51] Serum uric acid may also contribute to tubulointerstitial disease in the kidneys of hypertensive patients and lead to salt-dependent hypertension.[52] Interestingly, many clinical situations in which increased uric acid levels occur (e.g., obesity, aging, cyclosporine administration, or lead toxicity) are associated not only with hypertension but also with tubulointerstitial disease.[7] For example, long-standing hyperuricemia in patients with gout leads to uric acid crystal deposition in intrarenal tissues, tubulointerstitial injury, and development of hypertension.[7] In addition, hyperuricemia increases the risk for progression of other renal diseases such as IgA nephropathy.[53,54] Urate crystals are proinflammatory: they activate complement, stimulate neutrophils to release proteases and oxidants, stimulate macrophages, and activate platelets and the coagulation cascade.[55]
Pharmacologic Interventions That Lower Serum Uric Acid Levels
A number of pharmacological interventions lower serum uric acid levels, including allopurinol, probenecid, benzbromarone, benziodarone, sulfinpyrazone, salicylate, fibric acid, and losartan.[16,56] The xanthine oxidase inhibitor allopurinol treats the primary hyperuricemia of gout as well as secondary hyperuricemia associated with antineoplastic therapy or hematological dyscrasias (e.g., polycythemia vera and myeloid metaplasia) by preventing the formation of uric acid.[56] Allopurinol administration produces a 1-3 mg/dL reduction (approximately 30%) in serum uric acid concentrations in healthy young and elderly subjects.[15] Reductions in serum uric acid result from the ability of allopurinol to effectively inhibit xanthine oxidase, the enzyme responsible for the oxidation of hypoxanthine and xanthine in the final stages of uric acid synthesis. Allopurinol is both a substrate for and a competitive inhibitor of xanthine oxidase and is converted to alloxanthine (oxypurinol).
The classic example of the alternative strategy to treat hyperuricemia is probenecid. This agent promotes uric acid excretion via effects on the organic anion transport exchanger in the proximal tubule and reduces uric acid levels by 30% to 35%.[16] Benzbromarone is also effective in treating hyperuricemia.[57,58] A summary of results spanning 10 years in 200 patients treated with benzbromarone 75-120 mg/d indicates that serum uric acid levels decrease by an average of 54% and that the severity and incidence of articular manifestations in the patients with gout decreased by 75% over the first year of treatment.[58] Although used only occasionally as a uricosuric agent, sulfinpyrazone (50-800 mg) produced a dose-related increase in uric acid excretion and a corresponding dose-related reduction in the plasma concentration of uric acid in normal volunteers. Administration of 300 mg twice daily for 4 days reduced plasma uric acid by 64% (from 5.06 to 1.8 mg/dL).[59]
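The quoted 64% figure can be verified directly from the reported before/after concentrations; a trivial Python check:

    # Percent reduction in plasma uric acid from 5.06 to 1.8 mg/dL.
    before, after = 5.06, 1.8
    print(round((before - after) / before * 100))  # 64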
A number of newer agents have been shown to lower serum uric acid levels. The oral weight-loss agent sibutramine decreases serum uric acid in obese patients by 20% to 25%.[60-62] Similarly, in patients with Type 2 diabetes and hyperuricemia, the insulin-sensitizing agent troglitazone lowers serum uric acid by 20% to 25%.[63-65]
The AIIA losartan also produces a uricosuric effect in healthy volunteers, hypertensive patients, and CV transplant patients, typically decreasing serum uric acid levels by 20% to 25%.[66-71] The parent molecule losartan, not its active E-3174 metabolite, is the agent that blocks uric acid reabsorption. The uricosuric action of losartan is not shared by other antihypertensive agents.[71-74] ACE inhibitors and CCBs increase uric acid excretion, but the effect is modest and does not decrease serum uric acid levels.[66] Diuretics have a propensity to increase serum uric acid levels and may even, rarely, provoke attacks of gout.[75] This has led some authorities to recommend that they be avoided in patients with gout.[76] Losartan can offset the elevations in serum uric acid levels occurring with hydrochlorothiazide or indapamide.[77-79]
Increased uric acid secretion produced by losartan appears to result from a reduction in the postsecretory reabsorption of uric acid in the proximal tubule of the kidney.[14] Studies using human proximal brush border membrane vesicles indicate that tubular secretion and reabsorption of urate are mediated via a urate/anion exchanger and a urate voltage-sensitive transporter.[80,81] Losartan appears to inhibit the urate/lactate exchanger and urate/chloride exchanger in the proximal convoluted tubule with an affinity greater than that seen with probenecid.[66]
To date, the attenuating effect of losartan on serum and urinary uric acid levels has not been associated with untoward adverse effects, including flank pain or renal stone formation. Because losartan tends to increase urine pH, which increases the solubility of uric acid, the risk of supersaturation appears to be avoided. This is evidenced by a study of the effect of losartan on the risk of acute urate nephropathy in 63 hypertensive patients with thiazide-induced asymptomatic hyperuricemia.[82] Adverse events typically associated with acute urate nephropathy (flank pain, hematuria, or increased blood urea nitrogen/creatinine) were not reported. The study also showed that losartan decreased serum uric acid and increased uric acid excretion without increasing urinary dihydrogen urate, the primary risk factor for acute urate nephropathy, during 21 days of dosing in hypertensive patients with thiazide-induced hyperuricemia.
Reductions in Serum Uric Acid and Cardiovascular Outcomes
Several clinical observations linking uric acid to CV outcomes are consistent with the hypothesis that uric acid could play a causal role in CV disease. Some evidence even supports the possibility that interventions to reduce uric acid may affect CV outcomes. Allopurinol has been shown to improve endothelial dysfunction in patients with heart failure[75,83,84] and to reduce CV complications after coronary bypass surgery, including postoperative mortality, arrhythmias, inotrope requirement, and perioperative MI.[85-88] The hypothesis that serum uric acid levels not only predict mortality in heart failure but may also play a causal role has prompted initiation of a randomized, double-blind, placebo-controlled clinical trial (OxyPurinol Therapy for CHF, or OPT-CHF).[33] This study, begun in 2003, is designed to determine the efficacy and safety of oxypurinol plus standard therapy in NYHA class III-IV patients, using a composite of heart failure morbidity, exercise capacity, and mortality.
However, the CV benefits of allopurinol may also be related to its antioxidant properties, not just its ability to reduce uric acid levels.[89] Beneficial CV effects could thus be mediated via attenuation of xanthine oxidase-mediated free radical generation or antioxidant quenching of free-radical activity. Superoxide anions inactivate NO, and therefore allopurinol could theoretically enhance vascular NO activity.[90]
LIFE is the first study to demonstrate that lowering serum uric acid levels is associated with a beneficial effect on CV outcomes in patients with hypertension.[17,91] Although LIFE was not designed to test the hypothesis that reducing serum uric acid would prevent CV events, the study did show that losartan and atenolol affect serum uric acid levels differently.[91] LIFE involved more than 9000 hypertensive patients with LVH who were treated with either a losartan- or an atenolol-based antihypertensive regimen. Increases in serum uric acid levels were significantly smaller in patients receiving the losartan-based regimen (328 µmol/L at baseline to 348 µmol/L at 4 years) than in patients receiving the atenolol-based regimen (329 µmol/L at baseline to 376 µmol/L at 4 years). According to Cox regression analysis of LIFE data, up to 29% (95% CI, 14%-107%, p = 0.004) of the CV benefit of losartan-based versus atenolol-based therapy could be ascribed to differences in effect on serum uric acid levels.[92] Baseline serum uric acid levels were significantly associated with increased CV events (HR 1.024, 95% CI, 1.017-1.032 per 10 µmol/L, p < 0.0001) and, as a time-varying covariate, were strongly associated with the primary composite endpoint (p < 0.0001). The losartan benefit was largely manifested for stroke, which, in epidemiological studies, has been less tightly linked to uric acid levels than myocardial infarction. The unique uric acid-lowering effect of losartan may contribute to the superior reduction in CV mortality and morbidity beyond that seen with atenolol.
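To put the per-10 µmol/L hazard ratio in context, one can scale it across the roughly 28 µmol/L between-arm difference in achieved urate at 4 years. This assumes log-linearity across that span (an extrapolation on our part, not a reported result):

    # Implied hazard ratio over the ~28 umol/L between-arm urate difference,
    # scaling HR 1.024 per 10 umol/L under an assumed log-linear model.
    hr_per_10 = 1.024
    diff_umoll = 376 - 348  # atenolol vs losartan arms at 4 years
    print(round(hr_per_10 ** (diff_umoll / 10), 3))  # ~1.069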
The rise in serum uric acid in both losartan and atenolol groups may have been due to the diuretic HCTZ, which was equally used as an additional antihypertensive agent in both groups, or the effect of a gradual decline in renal function over the course of the 4-year study. These effects were counteracted by the uricosuric action of losartan. This assertion is supported by evidence that losartan has alleviated the rise in serum uric acid levels associated with the diuretics indapamide and HCTZ.[77,93]
Summary and Clinical Implications
Reductions in Serum Uric Acid and Cardiovascular Outcomes
Several clinical observations linking uric acid to CV outcomes are consistent with the hypothesis that uric acid could play a causal role in CV disease. Some evidence even supports the possibility that interventions to reduce uric acid may affect CV outcomes. Allopurinol has been shown to improve endothelial dysfunction in patients with heart failure[75,83,84] and to reduce CV complications after coronary bypass surgery, including postoperative mortality, arrhythmias, inotrope requirement, and perioperative MI.[85- 88] The hypothesis that serum uric acid levels not only predict mortality in heart failure but may also play a causal role, has prompted initiation of a randomized, double-blind, placebo-controlled clinical trial (OxyPurinol Therapy for CHF or OPT-CHF).[33] This study, begun in 2003, is designed to determine the efficacy and safety of oxypurinol plus standard therapy in NYHA class III-IV patients in the prevention of the combination of heart failure morbidity, exercise capacity, and mortality.
However, the CV benefits of allopurinol may also be related to its antioxidant properties, not just its ability to reduce uric acid levels.[89] Beneficial CV effects could thus be mediated via attenuation of xanthine oxidase-mediated free radical generation or antioxidant quenching of free-radical activity. Superoxide anions inactivate NO, and therefore allopurinol could theoretically enhance vascular NO activity.[90]
LIFE is the first study to demonstrate that lowering the serum uric acid level is associated with a beneficial effect on CV outcomes in patients with hypertension.[17,91] Although LIFE was not designed to test the hypothesis that reducing serum uric acid would prevent CV events, the study did show that losartan and atenolol affect serum uric acid levels differently.[91] LIFE involved more than 9000 hypertensive patients with LVH who were treated with either a losartan- or an atenolol-based antihypertensive regimen. Increases in serum uric acid levels were significantly smaller in patients receiving the losartan-based regimen (328 µmol/L at baseline to 348 µmol/L at 4 years) than in patients receiving the atenolol-based regimen (329 µmol/L at baseline to 376 µmol/L at 4 years). According to Cox regression analysis of LIFE data, up to 29% (95% CI, 14%-107%; p = 0.004) of the CV benefit of losartan- versus atenolol-based therapy could be ascribed to differences in effect on serum uric acid levels.[92] Baseline serum uric acid levels were significantly associated with increased CV events (HR 1.024; 95% CI, 1.017-1.032 per 10 µmol/L; p < 0.0001) and, as a time-varying covariate, were strongly associated with the primary composite endpoint (p < 0.0001). The losartan benefit was largely manifested for stroke, which, in epidemiological studies, has been less tightly linked to uric acid levels than myocardial infarction. The unique uric acid-lowering effect of losartan may contribute to its reduction in CV mortality and morbidity beyond that seen with atenolol.
The rise in serum uric acid in both the losartan and atenolol groups may have been due to the diuretic HCTZ, which was used equally as an additional antihypertensive agent in both groups, or to a gradual decline in renal function over the course of the 4-year study. These effects were counteracted by the uricosuric action of losartan. This assertion is supported by evidence that losartan attenuates the rise in serum uric acid levels associated with the diuretics indapamide and HCTZ.[77,93]
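As a rough, purely illustrative calculation (a sketch, not part of the LIFE analysis itself): a log-linear Cox hazard ratio per unit of exposure compounds multiplicatively, so the per-10-µmol/L figure quoted above can be scaled to the between-arm uric acid difference. The numbers below use only values stated in the preceding paragraph.

```python
# Illustrative sketch only -- not part of the LIFE analysis.
# The Cox model above reports HR 1.024 per 10 umol/L of serum uric acid;
# a log-linear hazard ratio compounds multiplicatively with exposure.

def compounded_hr(hr_per_unit: float, unit: float, delta: float) -> float:
    """Hazard ratio implied by a serum uric acid difference of `delta`."""
    return hr_per_unit ** (delta / unit)

# Between-arm difference at 4 years: atenolol rose 329 -> 376 umol/L (+47),
# losartan 328 -> 348 umol/L (+20), i.e. roughly 27 umol/L apart.
print(round(compounded_hr(1.024, 10, 27), 3))   # ~1.066, a ~7% higher hazard
print(round(compounded_hr(1.024, 10, 100), 3))  # ~1.268 for a 100 umol/L gap
```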
Summary and Clinical Implications
Substantial evidence supports the contention that serum uric acid is an important, independent risk factor for CV disease, especially in patients with hypertension, heart failure, or diabetes. This is particularly true for women and non-Caucasian subjects. The link between serum uric acid and CV risk is statistically significant, temporally consistent, independent of other confounding variables, specific, substantial in size, and dose related.
Elevated serum uric acid in hypertensive patients has been associated with a 3- to 5-fold increased risk of coronary artery disease or cerebrovascular disease compared with patients with normal uric acid levels. Collectively, these studies suggest that serum uric acid may be a powerful tool to help identify patients at high risk of CVD. Serum uric acid should therefore be considered along with other risk factors, such as obesity, hyperlipidemia, and hyperglycemia, in the assessment of overall CV risk.
The remaining key questions are whether uric acid has a causal relation to CV disease, whether a reduction would prevent CV and renal disease, and whether uric acid can be reduced to an optimal level whereby it no longer imposes an increased risk for CV disease. The LIFE study findings are encouraging. However, these issues can only be settled definitively through randomized clinical trials. Until then, the belief that treatment to reduce hyperuricemia will be cardioprotective must rest on observational and mechanistic evidence.
From Southern Medical Journal
What Should We Eat? Evidence from Observational Studies
Stephen M. Adams, MD; John B. Standridge, MD
Posted: 09/07/2006; South Med J. 2006;99(7):744-748. © 2006 Lippincott Williams & Wilkins
Abstract and Introduction
Abstract
Observational studies provide a wealth of important correlations between diet and disease. There is a clear pattern of dietary habits that is associated with reduced rates of a multitude of common illnesses, including heart attack, cancer, stroke, diabetes, and hypertension. In some cases, interventional studies have proven the benefits of dietary change; in others, there is insufficient evidence to prove causation. Based on the existing evidence, the optimal diet should emphasize fruits and vegetables, nuts, unsaturated oils, whole grains, and fish, while minimizing saturated and trans fats, sodium, and red meats. Its overall calorie content should be low enough to maintain a healthy weight.
Introduction
There is extensive epidemiologic evidence linking dietary content with a wide variety of illnesses. Large differences in the rates of heart disease, stroke, diabetes, and cancer have all been linked to food choices. Unfortunately, most of the data are observational, and in many cases there is little direct evidence of benefit from dietary interventions because of the lack of sufficiently powered clinical trials. Dietary intervention trials that measure surrogate endpoints are much more common than those that show direct changes in morbidity and mortality. Nevertheless, it is possible to make evidence-based recommendations regarding diet. The following paragraphs highlight evidence of some of the links between disease and dietary habits derived from observational studies.
Diet and Heart Disease
The relationship between diet and coronary artery disease is probably the most frequent dietary topic that physicians face. A number of common misconceptions find their way into the advice physicians give their patients. The most striking example is the universal recommendation to cut back on fat without making distinctions between types of fat. In contrast, epidemiologic studies show that total fat intake does not correlate with coronary artery disease risk, and those who consume more unhydrogenated polyunsaturated and monounsaturated fats, such as olive oil, have lower rates of heart disease.[1] It thus seems evident that the type of fat is much more important than the amount of fat consumed.[2] The Nurses' Health Study showed no correlation between total fat consumption and coronary disease in women, but considerable differences between types of fat. Based on this study, it was estimated that replacing 5% of the energy derived from saturated fats with unsaturated fats would reduce the risk of coronary disease by 42%. Replacing a mere 2% of calories derived from trans fat with unsaturated, unhydrogenated fats results in an estimated risk reduction of 53%.[3] In contrast, the Health Professionals Follow-up Study failed to find statistically significant relationships between saturated fat consumption and myocardial infarction in men. The same study showed a strong inverse association between linolenic acid (an omega-3 fatty acid found in various plant oils) and heart disease (relative risk 0.41 per 1% increase in energy consumption) after controlling for nondietary risk factors and adjusting for total fat intake.[4] The Lyon Diet Heart Study, which compared a low-fat diet with a Mediterranean diet emphasizing fruits, vegetables, cereals, and unsaturated fats, lends further credence to this idea. The adjusted risk ratio for primary and secondary endpoints was 0.53 after 4 years, and the all-cause mortality rate was 56% lower in the Mediterranean diet group.[5] It must be noted, however, that there have been criticisms of the study's methodology.[6] The Indian Heart Study tested a diet that emphasized fruits, vegetables, nuts, and grains against standard advice for a low-fat diet; the intervention group had 40% fewer cardiac events and lower mortality after one year.[7] The Nurses' Health Study found a coronary heart disease relative risk of 0.76 for women who ate a diet rich in fruits, vegetables, whole grains, legumes, poultry, and fish and low in refined grains, potatoes, and red or processed meats. The risk ratio comparing the 20% of subjects with the best diet against the 20% with the worst diet was 0.64.[8]
Fish consumption has been associated with a lower risk of heart disease in multiple studies. A recent meta-analysis showed an overall 20% reduction in the risk of fatal myocardial infarction among those who consume fish versus those who consume little or no fish.[9] Patients who eat tuna or baked or broiled fish have up to a 32% lower rate of congestive heart failure; eating fried fish does not appear to be protective.[10] Since 2000, the American Heart Association's dietary guidelines have recommended that healthy adults eat at least two servings of fish per week, particularly fatty fish such as mackerel, lake trout, herring, sardines, albacore tuna, and salmon.[11] The beneficial health effects of fish consumption are widely believed to be due to the omega-3 fats they contain. A Cochrane Database analysis could not find proof of a link between omega-3 intake and improved cardiovascular health,[12] but a more recent meta-analysis of omega-3 studies found evidence of decreased mortality, nonfatal myocardial infarction, and sudden cardiac death.[13] The largest omega-3 trial (GISSI) enrolled over 11,000 patients and showed a statistically significant decrease in mortality within 3 months of therapy, with a relative risk of 0.59; the relative risk of sudden death at 4 months was 0.47.[14]
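Because this article quotes benefits sometimes as relative risks (RR) and sometimes as percent reductions, a minimal sketch of the conversion may help; the conversion is elementary, and the figures used are those quoted above.

```python
# Sketch: a relative risk maps to a percent reduction as 100 * (1 - RR).

def pct_reduction(rr: float) -> float:
    return 100 * (1 - rr)

# Figures quoted above: fish meta-analysis (RR 0.80), GISSI mortality (0.59),
# GISSI sudden death (0.47).
for rr in (0.80, 0.59, 0.47):
    print(f"RR {rr:.2f} -> {pct_reduction(rr):.0f}% relative risk reduction")
```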
A diet high in whole grains has been linked with a lower risk of heart disease. In the Nurses' Health Study, after 10 years of follow-up, the 20% of patients who ate the most whole grain products had a relative risk of coronary heart disease of 0.67 compared with the 20% with the lowest intake.[15] The Iowa Women's Health Study had nearly identical results, with a relative risk of 0.7 between the highest and lowest quintiles of whole grain consumption.[16] The Health Professionals Follow-up Study showed similar evidence of benefit after 14 years, with a relative risk of 0.82 between the extreme quintiles of whole grain intake. Those in the highest quintile of added bran intake had a relative risk of coronary heart disease of 0.70 relative to those with no intake of added bran; added germ showed no benefit.[17]
The consumption of fruits and vegetables is associated with a lower risk of heart disease. A 13-year study of men in Finland demonstrated that after adjustment for other risk factors, there was a relative risk of cardiovascular-related death of 0.59 for the highest quintile of fruit, berry, and vegetable intake compared with the lowest quintile.[18] Analysis of the results of the Nurses' Health Study and the Health Professionals Follow-up Study showed that consumption of fruits and vegetables, particularly vitamin C-rich fruits and vegetables, appears to have a protective effect against coronary heart disease.[19]
Eating nuts is associated with reduced heart disease. In the Adventist Health Study, those who ate nuts 1 to 4 times per week had 22% fewer myocardial infarctions, and those who ate nuts 5 or more times per week had 51% fewer infarctions. The effect appeared to be independent of vegetarian or nonvegetarian status.[20] Vegetarians have a dramatically lower risk of heart disease: analysis of observational studies shows an age-standardized rate ratio for a first fatal coronary heart disease event of 0.59 for men and 0.49 for women. The ratios for a first myocardial infarction are 0.60 and 0.46 for men and women, respectively.[21]
Diet and Stroke
Observational studies have shown links between diet and risk of stroke that are similar to those seen with heart disease. The Nurses' Health Study showed that after risk factor adjustment, women who ate fish once a week had a relative risk of stroke of 0.78 compared with those who ate fish less than once a month; the relative risk for those who ate fish 5 or more times per week was 0.48.[22] The Health Professionals Follow-up Study reported similar findings in males, with a relative risk of 0.57 for those who ate fish 1 to 3 times per month compared with those who ate fish less than once a month.[23] A meta-analysis of 9 independent cohorts showed similar relationships; the authors concluded that fish consumption as infrequently as 1 to 3 times per month may protect against ischemic stroke.[24] The way in which fish is prepared may be important. A study of 4,775 adults 65 years or older showed, after 12 years, a 27% reduction in ischemic stroke for those who ate tuna or baked/broiled fish one to four times per week. There was a statistically significant increase in stroke for those who ate fried fish and/or fish sandwiches, and a 44% increased risk of hemorrhagic stroke in that same group.[25] The Danish Diet, Cancer, and Health Study followed 54,506 men and women for four years and found an inverse relationship between fruit and vegetable intake and ischemic stroke; the relative risk for the highest quintile of consumption relative to the lowest was 0.60.[26] In a large Japanese study spanning almost 2 decades, daily fruit intake was associated with a 35% reduction of stroke in men and a 25% reduction in women, and daily intake of green-yellow vegetables was associated with a 26% reduction in stroke death in both sexes compared with those who ate one serving or less per week.[27] The Nurses' Health Study showed a strong association between eating whole grains and decreased ischemic stroke risk: the relative risk between the highest and lowest quintiles was 0.49, or 0.69 after controlling for tobacco and cardiovascular disease risk factors.[28] Further analysis of the same study found an increased relative risk (1.58) of stroke between the highest and lowest quintiles of those consuming a Western-pattern diet (red and processed meats, refined grains, sweets, and desserts) versus a prudent diet (fruits, vegetables, legumes, fish, and whole grains).[29] Higher cereal fiber intake is associated with lower ischemic and hemorrhagic stroke risk in women (relative risks = 0.66 and 0.51, respectively). In contrast, the Health Professionals Follow-up Study showed no statistically significant association in men between stroke and cholesterol, red meat, dairy products, nuts, eggs, or specific types of fats.[30] A high intake of refined carbohydrates may increase the risk of hemorrhagic stroke in women, particularly among those who are obese.[31]
Diet and Cancer
There is a real need for additional large studies to clarify the relationship between diet and cancer. Multiple case-control studies have shown a relationship between diet quality and cancer risk. In 1997, the World Cancer Research Fund Report stated that, based on the data available at the time, the consumption of fruits and vegetables was probably or convincingly associated with a lower risk of cancers of the mouth, esophagus, lung, stomach, large intestine, larynx, pancreas, breast, and bladder. Subsequently, analysis of the Harvard Nurses' Health Study and Health Professionals Follow-up Study found no association between fruit and vegetable consumption and cancer risk. The study populations as a whole consumed far more fruits and vegetables than the US average, and it has been suggested that the failure to show benefit could be due to a plateau effect in the relationship between diet and disease.[32] A more recent report did show statistically significant associations between the recommended food score (RFS, a measure of overall diet quality) and cancer. In a cohort of 42,254 American women with a median follow-up of 9.5 years, RFS was inversely related to cancer risk and death; the relative risk was 0.8 for total mortality, 0.74 for cancer mortality, 0.75 for breast cancer, 0.49 for colon/rectal cancer, and 0.62 for lung cancer.[33] In contrast, a pooled analysis of two prospective studies in Japan, which included 88,658 men and women, showed no association between colorectal cancer and fruit or vegetable intake.[34] Red meat and meat cooked at high temperatures are associated with an increased risk of adenomas of the colon.[35] A large prospective European study (478,040 patients) showed a link between colorectal cancer risk and the intake of red and processed meat (corrected hazard ratio 1.55 per 100 g increase in consumption) and a decreased risk with increasing levels of fish consumption (corrected hazard ratio 0.46 per 100 g increase in consumption).[36] High levels of red meat consumption are also associated with a 68% increase in pancreatic cancer.[37] There is some evidence that diet may contribute to improved outcomes in patients with cancer. A study of 1,551 women with a history of breast cancer found that those with the highest levels of plasma carotenoids (a marker of vegetable intake) had a much lower risk of a new breast cancer event (hazard ratio = 0.57).[38] An Australian study showed improved survival in ovarian cancer with increasing levels of vegetable consumption (hazard ratio = 0.75).[39] An Italian case-control study of ovarian cancer likewise showed increased risk with red meat (odds ratio [OR] = 1.53) and decreased risk with fish (OR = 0.51) and raw (OR = 0.47) and cooked (OR = 0.65) vegetables.[40]
Diabetes and Diet
The relationship between obesity and diabetes is one of the strongest risk factor-disease associations: adults with a body mass index (BMI) over 35 have a 20-fold higher risk of type 2 diabetes than those with a normal BMI.[41] Diet and obesity are obviously tightly linked, but there are also dietary factors that predict an increased risk of type 2 diabetes independent of BMI. The Nurses' Health Study revealed that a dietary pattern high in sugar-sweetened soft drinks, refined grains, diet soft drinks, and processed meat but low in wine, coffee, and cruciferous and yellow vegetables was associated with increased risk. After adjusting for BMI and lifestyle, the odds ratio comparing extreme quintiles was 2.56 in the Nurses' Health Study and 2.93 in the Nurses' Health Study II.[42] After controlling for other risk factors, there also appears to be a modest association between high levels of red meat consumption and type 2 diabetes mellitus (relative risk = 1.28).[43] Higher consumption of saturated fat is associated with higher rates of impaired glucose tolerance and diabetes, while unsaturated fats are inversely associated with risk.[41]
Hypertension and Diet
The most famous study linking diet and hypertension is the Dietary Approaches to Stop Hypertension (DASH) study. After eight weeks of a diet rich in fruits and vegetables, an 11.4 mm Hg drop in systolic blood pressure and a 5.5 mm Hg drop in diastolic blood pressure were seen in subjects with hypertension compared with the control group.[44] The Trials of Hypertension Prevention II showed a reduced risk of hypertension with weight loss and exercise, with a relative risk of 0.81 at 36 months; the relative risk of hypertension after 3 years was 0.35 for those who lost at least 4.5 kg and maintained the loss.[45] The same study suggests that a decrease in blood pressure of 4.4/2.8 mm Hg can be obtained by reducing sodium intake by 100 mmol/d.[46] The Intersalt study showed a significant association between salt consumption and hypertension across 52 international centers, as well as an association between the dietary sodium-to-potassium ratio and rates of hypertension.[47] Diets low in calcium and magnesium are also associated with elevations in blood pressure.[48] The CARDIA study followed young people (18 to 30 years old) for 15 years and revealed significant associations between dietary content and hypertension: plant food intake was protective (relative risk between extreme quintiles 0.64), while red and processed meat consumption was associated with increased risk.[49]
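For context, the 100 mmol/d sodium reduction cited above can be translated into everyday units. This minimal sketch assumes only the standard molar masses of sodium and sodium chloride; neither figure comes from the article itself.

```python
# Sketch: converting the 100 mmol/day sodium reduction quoted above to grams.
# Assumes standard molar masses: Na ~23.0 g/mol, NaCl ~58.4 g/mol.

mmol_na = 100
grams_na = mmol_na * 23.0 / 1000     # ~2.3 g of sodium
grams_salt = mmol_na * 58.4 / 1000   # ~5.8 g of table salt

print(f"{mmol_na} mmol/day sodium = {grams_na:.1f} g sodium "
      f"= {grams_salt:.1f} g salt per day")
```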
The Optimal Macronutrient Intake Trial to Prevent Heart Disease (OmniHeart) evaluated three healthful diets in a randomized crossover design over six-week periods. The comparators were (1) a carbohydrate-rich diet similar to the DASH diet; (2) a protein-rich diet, with approximately half of the protein derived from plant sources (such as grains, legumes, nuts, and seeds); and (3) a diet rich in monounsaturated fat (principally olive, canola, and safflower oils). All three diets lowered blood pressure and lipids from baseline, with the best results obtained with the protein-rich and unsaturated fat-rich diets. The investigators write, "Partial substitution of carbohydrate with either protein or monounsaturated fat can further lower blood pressure, improve lipid levels, and reduce estimated cardiovascular risk." They further conclude that, in addition to salt, potassium, weight, alcohol, and the DASH diet, macronutrients also affect blood pressure, and that the DASH diet can be improved.[50]
Conclusion
There is sufficient evidence to argue that diet contributes directly or indirectly to the rates of seven of the top ten causes of death in the United States. The four causes from that list discussed above (heart disease, cancer, stroke, and diabetes) together account for 61% of all deaths in the US.[51] The dietary habits associated with reduced rates of these many different diseases are remarkably similar for each disease. These habits are accurately reflected in the revised USDA Dietary Guidelines for Americans 2005, available at http://www.health.gov/dietaryguidelines/dga2005/document/default.htm. Okinawans, who have exceptional longevity, have been studied at home in Japan and abroad in Hawaii and Brazil, using biomarkers for fish and soy intake as well as interventional studies. The results indicate that fish and soy, along with seaweed and green vegetables, are candidates for chronic disease prevention as well as overall promotion of longevity.[52]
There is more than enough evidence to confidently make dietary recommendations for populations. For some illnesses, further study is needed to clarify the magnitude of benefit an individual patient might expect from dietary modification. Data from the Adventist Health Studies suggest that the overall benefit of multiple positive lifestyle factors could account for up to a 10-year difference in life expectancy.[53]
From Medscape Medical News
High Plasma Urate Strongly Linked to Reduced PD Risk
Caroline Cassels
June 29, 2007 — A new study has found that high levels of plasma urate are strongly associated with a reduced risk for Parkinson's disease (PD), a finding that may ultimately have implications for slowing disease progression.
In a large, prospective study, investigators at Harvard School of Public Health, in Boston, Massachusetts, found men in the top quartile of blood urate concentration had a 55% lower risk of developing PD than men in the bottom quartile.
"The data are very compelling and establish urate as the only known biomarker for Parkinson's disease," principal investigator Marc Weisskopf, PhD, ScD, told Medscape.
Even more exciting, said Dr. Weisskopf, is the possibility that raising urate levels may have a therapeutic impact in individuals who already have the disease.
"We're not there yet, but it's possible we're on the verge of finding a treatment to slow PD. Work done by a couple of my coauthors, which has not yet been published, does seem to suggest that increased urate levels may be related to a slower rate of disease progression," he added.
The study was published online June 20 in the American Journal of Epidemiology.
Powerful Antioxidant
Urate, a powerful antioxidant, could potentially work by preventing oxidative stress, researchers speculate; oxidative stress appears to play a key role in the progressive loss of dopaminergic neurons in the substantia nigra that characterizes PD.
The study cohort included 18,018 men who were participants in the Health Professionals Follow-up Study. Between April 1993 and August 1995, blood samples were collected and participants were followed for incident PD until 2002.
For each confirmed case of PD, 2 controls with no PD diagnosis were randomly selected and matched by age, race, and time of blood collection. Participants were then divided into quartiles of urate concentration, which were compared with PD status.
A total of 84 incident cases of PD were diagnosed. The mean plasma urate concentration was 5.7 mg/dL for cases and 6.1 mg/dL among age-matched controls.
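For readers comparing this article with the LIFE discussion earlier in this document, which reports urate in µmol/L rather than mg/dL, a small unit-conversion sketch may help. It assumes only the standard molar mass of uric acid (~168.1 g/mol), which gives roughly 59.48 µmol/L per mg/dL.

```python
# Sketch: converting plasma urate between the units used in this document.
# Uric acid is ~168.1 g/mol, so 1 mg/dL ~= 59.48 umol/L.

MGDL_TO_UMOLL = 59.48

for label, mgdl in (("cases", 5.7), ("controls", 6.1)):
    print(f"{label}: {mgdl} mg/dL ~= {mgdl * MGDL_TO_UMOLL:.0f} umol/L")
# cases: ~339 umol/L; controls: ~363 umol/L
```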
According to Dr. Weisskopf, only 2 previous prospective studies have investigated the relationship between plasma urate and PD risk. Both suggested that individuals with high serum urate levels have a lower PD risk, but these associations were either nonsignificant or only marginally significant.
Compelling Evidence
However, as part of this study, the investigators conducted a meta-analysis of the 3 studies. "When we combined the results of all the research to date, we were surprised by the strength and consistency of the results. When you put all the data together, the evidence is very, very compelling that urate is indeed associated with a lower risk for PD and worth pursuing as a possible neuroprotective strategy," he said.
However, he added, high urate levels are associated with an increased risk for overall mortality, adverse kidney effects, possibly cardiovascular effects, and an increased risk for gout. As a result, it is unlikely that raising urate levels would ever be adopted as a preventive PD strategy.
On the other hand, he said, in patients who already have PD, it may well offer a viable treatment strategy. However, he said, this hypothesis needs to be confirmed in a large, randomized controlled interventional trial.
"We don't want to just wantonly increase urate levels in everybody, because it is not at all clear that the risk/benefit balance will come out in a positive way. However, if you restrict this to individuals with PD, that may change the equation," he said.
Increasing Urate Easily Done
Increasing urate levels can be done relatively simply, through a diet high in purine-rich foods, which include organ meats, legumes, mushrooms, spinach, asparagus, and cauliflower, as well as certain types of fish such as sardines and herring. Beer and other alcoholic beverages also raise urate levels. In addition, said Dr. Weisskopf, inosine, an over-the-counter supplement frequently marketed to bodybuilders to "increase energy," can also raise plasma urate levels.
In addition to an interventional trial to examine the question of disease progression, future research will expand on the limited data on this association in women, explore the role of diet in disease etiology as it relates to urate levels, and examine the possibility that gene/environment interactions may modify urate's potential protective effects.
Am J Epidemiol. 2007. Published online June 20, 2007.
From Nutrition and Metabolism
Health Implications of Fructose Consumption: A Review of Recent Data
Salwa W Rizkalla
Posted: 01/12/2011; Nutr Metab. 2010;7 © 2010 BioMed Central, Ltd.
Abstract and Introduction
Abstract
This paper reviews evidence in the context of current research linking dietary fructose to health risk markers.
Fructose intake has recently received considerable media attention, most of it negative. The assertion has been that dietary fructose is less satiating and more lipogenic than other sugars. However, no fully relevant data have been presented to establish a direct link between dietary fructose intake and health risk markers such as obesity, triglyceride accumulation, and insulin resistance in humans. First, a re-evaluation of published epidemiological studies concerning the consumption of dietary fructose, or mainly of high fructose corn syrup, shows that most such studies have been cross-sectional or based on passive, inaccurate surveillance, especially in children and adolescents, and thus have not established direct causal links. Second, research evidence on the short-term (acute) satiating power of fructose, or its effect on food intake, compared with normal patterns of sugar consumption such as sucrose, remains inconclusive. Third, the results of longer-term intervention studies depend mainly on the type of sugar used for comparison: typically aspartame, glucose, or sucrose is used, and no negative effects are found when sucrose serves as the control.
Negative conclusions have been drawn from studies in rodents, or in humans, that attempted to elucidate the mechanisms and biological pathways underlying the effects of fructose consumption by using unrealistically high fructose amounts.
The issue of dietary fructose and health is one of the quantity consumed, as it is for any macro- or micronutrient. Moderate fructose consumption of ≤50 g/day, or ~10% of energy, has no deleterious effect on lipid and glucose control, and consumption of ≤100 g/day does not influence body weight. No fully relevant data account for a direct link between moderate dietary fructose intake and health risk markers.
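The grams-to-percent-of-energy equivalence in the abstract can be checked with simple arithmetic. This sketch assumes 4 kcal per gram of carbohydrate and a 2,000 kcal/day reference intake; the reference intake is an assumption, not a figure from the review.

```python
# Sketch: checking the "~50 g/day ~ 10% of energy" equivalence quoted above.
# Assumes 4 kcal/g for carbohydrate and a 2,000 kcal/day reference diet.

def percent_energy(grams_fructose: float, total_kcal: float = 2000.0) -> float:
    return 100 * grams_fructose * 4 / total_kcal

print(percent_energy(50))   # 10.0 -> 50 g/day is 10% of a 2,000 kcal diet
print(percent_energy(100))  # 20.0 -> the 100 g/day body-weight threshold
```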
Introduction
Fructose, a natural sugar found in many fruits, is consumed in significant amounts in Western diets.[1] In equal amounts, it is sweeter than glucose or sucrose and is therefore commonly used as a bulk sweetener.
An increase in the consumption of high fructose corn syrup, as well as of total fructose, over the past 10 to 20 years has been linked to a rise in obesity and metabolic disorders.[2] This raises concerns regarding the short- and long-term effects of fructose in humans.
Why is Fructose of Concern?
Fructose has been claimed to be of concern for several reasons. First, in the 1980s, sucrose was replaced to a large extent, particularly in North America, by high fructose corn syrup (HFCS) in carbonated beverages, and the intake of soft drinks containing HFCS has risen in parallel with the epidemic of obesity.[3] Second, dietary fructose has been implicated in risk factors for cardiovascular disease (CVD): (1) plasma triglycerides (TG) and VLDL-TG increase following the ingestion of large quantities of fructose; (2) fructose intake has been found to predict LDL particle size in overweight schoolchildren;[4] and (3) a positive relationship has been demonstrated between fructose intake and uric acid levels.[5] Third, the use of fructose as a sweetener has increased: the Third National Health and Nutrition Examination Survey (NHANES III) demonstrated that over 10% of Americans' daily calories came from fructose.[6] These observations suggest that the relationship between fructose and health needs re-evaluation.
Fructose Consumption and Body Weight
Lipogenesis following fructose consumption may theoretically be greater than that induced by other types of sugars such as glucose and sucrose.[7] But is this physiologically true?
Evidence from Experimental Studies in Animals
The evidence that dietary fructose, but not glucose, increases appetite and food intake in acute-term studies has been derived mainly from experimental studies in animals. Although glucose and fructose utilize the same signaling pathway to control food intake, they act in an inverse manner and have reciprocal effects on the level of hypothalamic malonyl-CoA, a key intermediate in the hypothalamic signaling cascade that regulates energy balance in animals.[8] When injected into the cerebroventricles of rats, fructose increases food intake via a reduction of hypothalamic malonyl-CoA levels, whereas similar concentrations of injected glucose increase malonyl-CoA, suppressing appetite and food intake.[9] The rapid initial steps of central fructose metabolism deplete the hypothalamic ATP level, whereas the slower, regulated steps of glucose metabolism elevate it. Consistent with its effects on the [ATP]/[AMP] ratio, fructose increases phosphorylation/activation of hypothalamic AMP kinase, causing phosphorylation/inactivation of acetyl-CoA carboxylase, whereas glucose mediates the inverse effects.
The question has been raised as to whether fructose induces the same effects when presented in the systemic circulation rather than injected directly into the brain. Cha et al[10] demonstrated that when glucose was administered intraperitoneally, and hence entered the systemic circulation, it was rapidly metabolized by the brain, increasing the level of hypothalamic malonyl-CoA. Fructose administration, however, had the opposite effect on malonyl-CoA and food intake. Such a finding might appear to set off another alarm bell about the problems of dietary fructose. Closer inspection, however, reveals that the latter study used only 4 mice, injected with a dose of 4 g/kg of body weight, a dose too large to be considered relevant to human nutrition. While this paper demonstrated that high doses of fructose and glucose act on different pathways, the physiological significance of these results remains unclear. Fructose ingestion is unlikely to increase fructose levels in the cerebrospinal fluid, and plasma fructose levels never exceed the micromolar range under physiological conditions. Some authors have questioned these effects.[11] Therefore, no evidence of cause for health concern can be drawn from such acute studies in rodents.
The effects of fructose on body weight have been further questioned. When rats were fed a high fructose diet (60%) for 6 months and then switched to a high fat diet for 2 weeks, leptin levels increased and a state of leptin resistance was found prior to the increased adiposity and body weight induced by the high fat diet.[12] However, in other, shorter-term studies (3–6 weeks), high fructose feeding (57% by weight) induced insulin resistance and hypertriglyceridemia in rats but failed to increase body weight.[13–15]
Thus, in rodents, while excessively high fructose intake may increase appetite through several mechanisms, an effect on body weight emerges only after long dietary periods.
Acute Studies in Humans: Fructose, Food Intake and Satiety
Sugars and sugar-sweetened beverages have been blamed for causing obesity, but the debate has raged for many years with little resolution.[16] More recently, the debate was intensified by the hypothesis that HFCS leads to obesity because fructose bypasses the food intake regulatory system (insulin and leptin) and favors lipogenesis.[17] It was hypothesized that energy-containing drinks, especially those sweetened with HFCS, promote energy imbalance and thereby play a role in the development of obesity. In an acute-term study,[17] 12 normal-weight women consumed meals containing 55%, 30%, and 15% of total calories as carbohydrate, fat, and protein, with 30% of kcal as either fructose-sweetened or glucose-sweetened beverages. As expected, glucose excursions and insulin secretion were lower after the fructose meals than after the glucose meals. This was associated with a decrease in leptin levels, an expected consequence of lower insulin levels. It is important to note that the reduction in leptin remained within the normal physiological range, fluctuating between 9 ng/mL in the morning and 19.8 ng/mL at night. On the basis of this acute study of a single meal, the authors suggested that because insulin and leptin (the main regulatory factors of food intake) were lower after fructose meals, such meals might increase caloric intake and ultimately contribute to weight gain and obesity. Fructose meals, however, should be compared with sucrose, the usual dietary sugar, and not with glucose, which produces extreme values.
The question was then raised whether HFCS has a different effect on satiety than other isoenergetic drinks such as sucrose or milk; again, this question was investigated in an acute study. To obtain a simple answer, Soenen and Westerterp[18] compared the satiating effects of 800 mL of HFCS, sucrose, and milk drinks, each containing 1.5 MJ, with a diet drink containing no energy. They measured satiety by a visual analogue scale and by determining concentrations of the satiety hormones leptin and ghrelin. They concluded that the energy balance consequences of the three isoenergetic drinks were the same. In terms of satiety, therefore, fructose is not different from the usually consumed sugar or even from another isocaloric drink (milk).
In another study, Akhavan et al[19] aimed to evaluate whether HFCS in soft drinks differs from sucrose solutions. They compared solutions containing sucrose, HFCS, or various ratios of glucose to fructose (e.g., G50:F50) with respect to food intake, average appetite, and plasma concentrations of glucose, insulin, and ghrelin. Measurements were taken from baseline to 80 minutes only. The authors concluded that the solutions tested did not have significantly different effects on subjective and physiologic measures of satiety at a subsequent meal. There is therefore no solid evidence that sucrose, when consumed in its intact form, confers any benefit over HFCS, which contains the 2 unbound monosaccharides.
Similarly, in 24-hour studies, Stanhope et al[20] and Melanson et al[21] did not find substantially different effects between meals sweetened with sucrose and those sweetened with HFCS on 24-hour plasma glucose, insulin, leptin, and ghrelin levels. Even TG profiles were similar between the two tests. These responses were intermediate between the lower responses after consumption of pure fructose syrup and the higher responses after ingestion of a glucose solution. There was no difference in food intake during a meal consumed 50 minutes later or in the components of food intake regulatory mechanisms.
Chronic Studies in Humans
Although acute fructose consumption does not stimulate leptin secretion, an increase in fasting leptin levels was detected after chronic high fructose intake (1 to 4 weeks) in healthy individuals, which may suggest that high fructose feeding suppresses food intake in the long term.[22] Another long-term study in overweight/obese humans showed no change in body weight after 10 weeks of supplementation with glucose or fructose, indicating that the effects of fructose and glucose on food intake might not differ over the long term.[23]
In a cluster randomized controlled study,[24] the effect of a focused educational intervention program on consumption of carbohydrate-sweetened beverages and on overweight was studied in 644 children (7–11 years old). Children participated in a program designed to emphasize the consumption of a balanced diet and to discourage the consumption of sweetened drinks (sweetened mainly by sucrose, i.e., glucose/fructose). Sweetened drink consumption decreased in the intervention group and increased in the control group. Parallel changes in BMI occurred in each group, but without any difference between the two groups. Therefore, no conclusion could be drawn about appetite or body weight even when fructose is present as part of sucrose.
Epidemiological Studies
The recent epidemiological study of Vos et al[6] created new concern regarding fructose consumption. These authors analyzed data from the US population that had participated in the NHANES III study, collected from 1988 to 1994; 21,483 adults and children 2 years of age or older were included. The investigators found that fructose consumption had increased to 54.7 g/day (10.2% of total caloric intake), compared with 37 g/day (8% of total intake) in 1977–1978. Consumption was highest among adolescents (12–18 years), at 72.8 g/day (12.1% of total calories). Over 10% of Americans' daily calories came from fructose.[6]
Bray et al[25] suggested that the increase in obesity in the last 35 years has paralleled the increasing use of high-fructose corn syrup (HFCS), which first appeared just before 1970. Current soft drinks and many other foods are sweetened with this product because it is inexpensive and has useful manufacturing properties. The fructose in HFCS and sugar makes beverages very sweet, and this sweetness may underlie the relationship between obesity and soft drink consumption. Indeed, in the United States, HFCS has increasingly replaced sucrose in many foods and sweetened beverages, a fact that might appear to strengthen the hypothesis of a relationship between fructose and obesity. This parallel between the rising consumption of HFCS and dietary fructose and the rise in obesity over the past 10–20 years has linked fructose to the rise in obesity and metabolic disorders, mainly in the United States.
This is not the case in Europe or elsewhere outside the United States, where fructose is consumed mainly as sucrose and fructose consumption is therefore linked mainly to sugar consumption. Moreover, the evidence from metabolic studies of fructose alone is irrelevant to the HFCS and weight gain debate. Most of the studies dealing with the causes of obesity and overweight have centered on HFCS.[26]
Cross-sectional Studies
In a cross-sectional study correlating the BMI of the NHANES 1988–1994 cohort with the results of 24-hour dietary recall and a food frequency questionnaire in a multivariate regression model, a positive association was found between consumption of carbonated soft drinks and the BMI of females.[27] Using the Continuing Survey of Food Intakes by Individuals (CSFII) in another cross-sectional study, Forshee et al[28] found that BMI had a statistically significant positive relationship with diet carbonated soft drink consumption in both boys and girls (n = 1749; children aged 6–11 years and adolescents aged 12–19 years). Other cross-sectional studies in American children demonstrated a positive correlation between soft drinks and BMI.[29,30] Among Pacific Island children living in New Zealand, where HFCS use is very limited, sucrose consumption was evaluated and correlated with body weight: the obese children consumed more of all types of food, with no difference in consumption patterns between obese and nonobese children.[30]
Most of the cross-sectional studies included no controls for sedentary behaviors, physical activity, or energy intake from sources other than beverages. Moreover, in these studies BMI and beverage consumption were self-reported and hence subject to measurement error. Causal relationships cannot be inferred from a cross-sectional study design.
In longitudinal epidemiologic studies, such as the US Growing Up Today Study (GUTS), which followed a cohort of more than 10,000 males and females (9–14 years old in 1996), the authors did not find a correlation between BMI and snack food consumption, including sugar-sweetened beverages,[31] when controlling for total energy.[32] In the North Dakota Special Supplemental Nutrition Program for Women, Infants, and Children (WIC),[33] no significant association was detected between any of the beverages evaluated and BMI. Even in a study of 30 children aged 6–13 years attending the Cornell Summer Day Camp in 1997,[34] excessive sweetened drink consumption (>370 g/day) did not correlate with weight gain. Again, the results of these longitudinal studies are not conclusive; most of the positive correlations disappeared when corrected for total energy.
Meta-analyses linking soft drink consumption and body weight have produced conflicting results. One meta-analysis of 12 studies in children and adolescents[35] failed to find a positive association between soft drink consumption and body weight, whereas another meta-analysis, covering 88 studies, found an association.[36]
Conclusion
The relation between HFCS and obesity has been derived mainly from epidemiological studies that attempt to relate the increase in consumption of dietary fructose and HFCS to the increase in obesity (see ref.[37]). In the epidemiological cross-sectional and longitudinal studies, the overall evidence for a positive correlation between consumption of soft drinks and overweight is limited. Causal inferences cannot be made from cross-sectional study designs with values subject to measurement error. The interventional acute studies (24 hours) found that fructose is associated with lower secretion of insulin and leptin and less suppression of ghrelin when compared with pure glucose. Such a difference, however, could not be demonstrated when HFCS was compared with sucrose, the commonly consumed sweetener. In addition, appetite and energy intake do not differ in the acute term. There are no long-term interventional studies investigating the direct relationship between HFCS and body weight,[38] with the exception of Tordoff et al,[39] who compared the consumption of 4 bottles of soda/day (1135 g) sweetened with HFCS or with aspartame for 3 weeks. Unsurprisingly, subjects who consumed the HFCS soda as extra calories gained more weight than those consuming the aspartame soda. There is evidence that body weight increases when calorie intake is in positive balance, regardless of whether the calories come from HFCS, fat, protein, or any other source. Moreover, in a recent meta-analysis, no significant effect of fructose consumption on body weight could be demonstrated at doses ≤100 g/day in adults.[40] Unfortunately, the recent focus on HFCS has done little to resolve the role of sugars in contributing to energy imbalance.
Meanwhile, a positive effect of fructose on satiety was demonstrated in the 1990s. Rodin et al[41–43] demonstrated that intake of 50 g of fructose alone as the sole source of carbohydrate, either in solution or in the form of puddings, 2 hours 25 minutes before a meal caused a decrease in appetite and lipid intake. Fructose could therefore even be used as an adjunct to weight control efforts.
Important Points
It is clear that fructose is poorly absorbed from the digestive tract when consumed alone. However, absorption improves when fructose is consumed in combination with glucose and amino acids.[44] In addition, the principal sweetener in US soft drinks, HFCS, is not pure fructose but a mixture of fructose (55%) and glucose (45%). HFCS is predominantly present as HFCS-55 (55% fructose, 41% glucose, and 4% glucose polymers) or HFCS-42 (42% fructose, 53% glucose, and 5% glucose polymers).[26] The term "high fructose corn syrup" is therefore not a good descriptor of its composition; it was mandated to distinguish the newly developed fructose-containing corn syrup from traditional all-glucose corn syrups. Factors that may account for the different effects of fructose alone versus a mixture of fructose and glucose include its gastrointestinal effects and absorption characteristics.[45]
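To make the compositional comparison concrete, here is a minimal sketch using only the percentages quoted above; the 50% fructose yield of sucrose follows from its structure as a glucose-fructose disaccharide.

```python
# Sketch: fructose delivered per 100 g of sweetener, using the compositions
# quoted above. Sucrose hydrolyzes to equal parts glucose and fructose.

fructose_fraction = {
    "HFCS-55": 0.55,
    "HFCS-42": 0.42,
    "sucrose": 0.50,
}

for name, frac in fructose_fraction.items():
    print(f"{name}: {100 * frac:.0f} g fructose per 100 g of sweetener")
# The gap between HFCS-55 and sucrose is only ~5 g per 100 g.
```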
It should also be noted that even the study that further heightened concerns about fructose intake,[4] which looked at overweight Swiss children, could not demonstrate any correlation between fructose intake and adiposity or any other lipid variable (cholesterol, triglycerides), with the exception of LDL particle size.
Clearly, fructose itself is not driving the obesity epidemic, but there is evidence supporting the possibility that refined carbohydrates in general could have a contributory role, if not a major one. Very recently, this problem has been attributed to all added sugars (high-fructose corn syrup or fruit-juice concentrates), and not only to added fructose.[46]
Fructose intake, as well as HFCS, may be a contributor, but it is not the sole problem. Obese subjects consume too many calories for their activity level, including too much fat, protein, and sugar. It is clear that energy imbalance for most individuals is caused by energy intake exceeding expenditure. A dietary solution to obesity remains elusive, and focusing on reducing a single food item is unlikely to succeed.[47,48] Moreover, overweight and obesity are influenced by many genetic[49–51] and environmental factors,[52] for instance:
a) promoting water consumption, which can prevent overweight among children in elementary school;[53] b) habituation of behavioral and physiological responses to repeated presentations of food;[54] c) addressing specific eating patterns;[55] and d) efforts to reduce fast food portion sizes.[56]
Whatever the cause of obesity, based on the currently available evidence, an expert panel formed by the Center for Food, Nutrition, and Agriculture Policy concluded that HFCS does not appear to contribute to overweight and obesity any differently than other energy sources.[26]
Fructose, Lipogenesis and Cardiovascular Risk Factors
Another concern with fructose intake is that it may induce hypertriglyceridemia and lipogenesis. Theoretically, fructose consumption can increase TG synthesis.[57]
Intestinal Absorption
Fructose is absorbed from the intestine via glucose transporter 5 (GLUT5) and then diffuses into the blood through GLUT2 or GLUT5,[58] but mainly through GLUT2. Contrary to glucose, fructose absorption from the intestinal lumen does not require ATP hydrolysis and is independent of sodium absorption, which results in massive fructose uptake by the liver.[59]
Hepatic Metabolism (Figure 1)
The hepatic metabolism of fructose also differs greatly from that of glucose. Contrary to glucose, fructose is metabolized exclusively in the liver, by fructokinase (Km: 0.5 mM). Glucose tends to be transported to the liver but can be metabolized anywhere in the body by glucokinase (Km of hepatic glucokinase: 10 mM).
Figure 1.
Fructose and glucose metabolism in liver cells: After several steps, glucose is converted into fructose 1,6-bisphosphate, a reaction regulated by the rate-limiting enzyme phosphofructokinase, which is inhibited by ATP and citrate. Overall, the conversion of glucose to pyruvate is regulated by insulin. Fructose, on the other hand, is taken up massively by the liver and converted rapidly to triose phosphate, independently of insulin control and without feedback inhibition by ATP or citrate. A large portion of fructose is converted into glucose, which can be released into the blood or stored as glycogen. A part is converted into lactate. A small portion is converted into fatty acids, which may play an important role in the development of hypertriglyceridemia and fatty liver.
In the liver, glucose is first phosphorylated by glucokinase to give glucose-6-phosphate, which is then converted to fructose-6-phosphate and further to fructose 1,6-bisphosphate. This process is regulated by the rate-limiting enzyme phosphofructokinase, which is inhibited by ATP and citrate. Fructose 1,6-bisphosphate is converted into pyruvate prior to entry into the Krebs cycle. The hepatic conversion of glucose to pyruvate is regulated by insulin.
In contrast, the conversion of fructose into triose phosphate is a rapid process, independent of insulin. Fructose bypasses the main regulatory step of glycolysis (the conversion of fructose-6-phosphate to fructose 1,6-bisphosphate controlled by phosphofructokinase) and hence can continuously enter the glycolytic pathway. This rapidity is due mainly to the low Km of fructokinase for fructose and to the absence of negative feedback by ATP or citrate.[60] A portion of the triose phosphate produced from fructose can subsequently be converted into pyruvate and oxidized into CO2 and water. Another portion is converted into lactate and released into the circulation.[61] The major portion of the triose phosphate produced from fructose is converted into glucose and glycogen through gluconeogenesis.[62] Finally, part of the carbon from fructose can be converted to fatty acids. Simultaneously, fructose inhibits hepatic lipid oxidation, favoring fatty acid re-esterification and VLDL-triglyceride synthesis.[63] Therefore, fructose can rapidly, and without regulation, produce glucose, glycogen, lactate, and pyruvate, providing both the glycerol and the acyl portions of acyl-glycerol molecules. These particular substrates, and the lack of regulation of this pathway, can result in large amounts of TG that are packed into very-low-density lipoproteins by the liver.
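The kinetic argument can be illustrated with the standard Michaelis-Menten saturation formula, v/Vmax = S / (Km + S), using the Km values given above. The substrate concentrations in this sketch are illustrative assumptions, not figures from the article.

```python
# Sketch: Michaelis-Menten fractional saturation, v/Vmax = S / (Km + S),
# with the Km values quoted above (fructokinase ~0.5 mM; hepatic
# glucokinase ~10 mM). Substrate levels below are assumptions.

def fractional_saturation(s_mm: float, km_mm: float) -> float:
    return s_mm / (km_mm + s_mm)

print(fractional_saturation(1.0, 0.5))   # ~0.67: fructokinase runs near
                                         # Vmax at an assumed ~1 mM fructose
print(fractional_saturation(5.0, 10.0))  # ~0.33: glucokinase is far from
                                         # saturated at ~5 mM glucose
```

The low Km of fructokinase means the enzyme operates close to its maximal rate even at modest substrate levels, consistent with the unregulated, rapid entry of fructose carbon described above.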
It is essential to note that the disposition of fructose carbon among its major end products is modified by nutritional and endocrine status.[64] Once fructose has been catabolized to three-carbon molecules, its subsequent metabolic fate is identical to that of glucose. Hence, fructose can also be converted to glycogen once a positive energy balance has been established. Glucose, on the other hand, is mainly stored as glycogen in the liver, but high glucose levels may increase the formation of glycerol-3-phosphate and accelerate hepatic triglyceride production.[65]
TG Clearance
Moreover, as VLDL enters the bloodstream, its TG can be hydrolyzed by lipoprotein lipase (LPL) into non-esterified fatty acids and monoacylglycerol, components that can be taken up by adipose tissue to resynthesize TG. Fructose consumption, however, does not stimulate insulin secretion; the resulting low insulin excursions may limit insulin-stimulated LPL activity and thus contribute to reduced TG clearance. Fructose consumption has therefore been suggested both to increase hepatic TG, which is packed into very-low-density lipoproteins by the liver, and to reduce TG clearance by adipose tissue.
Intestinal Origin of TG
Fructose-induced hyperlipidemia has also been hypothesized to be of intestinal origin. Jeppesen et al[66] demonstrated that the addition of 50 g of fructose to an oral fat load (40 g) resulted in higher postprandial concentrations of triglycerides and retinyl palmitate in plasma and in the lipoprotein fraction of intestinal origin. These effects were more pronounced in subjects with high fasting plasma triglyceride concentrations. The increase in plasma TG induced by a high fructose diet in hamsters was demonstrated to originate from fructose conversion into fatty acids within the enterocytes, with overproduction of apoB-48-containing lipoproteins.[67,68]
Evidence from Experimental Studies in Animals
Evidence of fructose-induced lipogenesis comes mainly from studies in rodents.[69,70] In fact, evidence exists that consuming large amounts of fructose leads to the development of a complete metabolic syndrome in rodents.[71–73]
In the liver, the ability to metabolize fructose more rapidly than glucose into different metabolites has been demonstrated in rats.[74] The ratio of fructose metabolism/glucose metabolism (F/G) varies between a minimum of 3 for lactic acid, pyruvic acid, CO2 and free fatty acids, and a maximum of 19 for glyceraldehyde-glycerol.
On the other hand, it has been demonstrated that feeding rats 75% (w/w) fructose or glucose diets increased the capacity for triglyceride formation from glycerol-3-phosphate by rat liver homogenates and increased incorporation of [1,3-14C]glycerol into hepatic TG in the intact animal.[65] Hepatic TG production changed with a time course characteristic of each diet; however, the 75% fructose diet produced a greater increase in both determinations, reaching a maximum after 11 days. Despite the increase in hepatic TG formation with both high-sugar diets, only the 75% fructose diet resulted in a consistent and sustained increase in serum TG. These results were suggested to be due to differences in the fractional rate of serum TG removal between the two groups. The authors proposed that high glucose intake most likely produces an early acceleration in the fractional rate of TG removal that fully compensates for any increased production, which could be related to increased insulin-stimulated adipose tissue lipoprotein lipase activity[75] and accelerated adipose tissue lipogenesis.[76–78] This is not the case with fructose, which does not stimulate insulin secretion.
Studies dealing with the mechanisms underlying fructose-induced lipogenesis have provided substantial evidence in animals.[79] Enzymes implicated in hepatic lipogenesis are increased by high fructose diets: in mice, seven days on a 60% fructose diet[80] increased hepatic expression of sterol regulatory element binding protein (SREBP-1), a key transcription factor responsible for regulating fatty acid and cholesterol biosynthesis, as well as lipogenic gene expression, including fatty acid synthase (FAS) and acetyl-CoA carboxylase. It is of interest that glucose feeding can induce, via insulin stimulation, a short-term peak induction of SREBP, whereas fructose causes a gradual increase in SREBP-1c activity, providing evidence that lipogenesis can be independent of insulin control but may depend on carbohydrate availability.[81]
Other studies examining the effect of high fructose feeding on mitochondrial and peroxisomal β-oxidation implicate fructose in reducing PPARα in rat hepatocytes. Eight weeks of a high-fructose diet decreased PPARα, a ligand-activated nuclear hormone receptor responsible for inducing mitochondrial and peroxisomal β-oxidation.[82] Fructose might therefore induce hepatic cellular lipid accumulation through decreased lipid oxidation secondary to reduced PPARα.
Of interest, lipid accumulation in fructose-fed rodents has been suggested to act through the intestinal flora. It has recently been shown that dietary alteration of the intestinal flora increases plasma levels of lipopolysaccharides (endotoxin). Fructose-fed mice developed endotoxemia and fatty liver that could be prevented with antibiotic treatment,[83] suggesting a bacterial origin of fructose-induced fatty liver.
Adiposity and Fat Storage in Adipose Tissue
Indeed, high fructose feeding has been found to increase adiposity. High dietary fructose intake and increasing body adiposity are clearly linked both in rats fed 57% dietary fructose[69,84,85] and in mice consuming fructose-containing soft drinks (HFCS, 15%, 61 kcal/100 mL, 52 g/day).[86] The increased adipose tissue mass in rats fed fructose for 3 or 6 weeks has been attributed in part to decreased isoproterenol-stimulated lipolysis and to the increased antilipolytic action of insulin.[69] Lipogenesis in rats, however, is shifted to the liver because fructose feeding (1) activates lipogenic enzymes such as fatty acid synthase and malic enzyme in the liver but not in adipose tissue,[72,87] and (2) depresses conversion of glucose to lipids in adipose tissue.[13,87,88] Nevertheless, a recent study demonstrated that very long periods (6 months) on HFCS might increase adipose tissue fat in Sprague Dawley rats.[89]
Similarly, intracellular lipid accumulation in the cytoplasm of muscle fibers, leading to insulin resistance, has been demonstrated after several weeks of a high-sucrose (not pure fructose) diet.[90]
Therefore, in animals a high-fructose diet induces lipogenesis mainly in the liver or muscle fibers, not in adipose tissue; the increased adiposity most likely reflects decreased lipid mobilization, for which various mechanisms have been implicated. These results were produced with high doses of fructose, whether as dietary fructose or as drinks, and therefore cannot be extrapolated to the effects of physiologically realistic amounts in humans.
Acute Studies in Humans
In an attempt to understand the mechanisms involved in fructose-induced hypertriglyceridemia and its contribution to de novo lipogenesis in an acute setting in humans, the group of Frayn[91] used a high dose of fructose (0.75 g/kg body weight) in a liquid breakfast of mixed macronutrients. [2H2]palmitate and [U-13C]fructose or [U-13C]glucose were added to trace the handling of dietary fats and the fate of dietary sugars in the body. Compared with glucose, fructose consumed with the fat-containing liquid increased the 4-h appearance of the meal's fatty acids in VLDL. They found, however, that the large amount of fructose used led to impaired triacylglycerol clearance rather than contributing to de novo lipogenesis.
In addition, Parks and co-workers[7] aimed to determine by how much acute consumption of fructose in a morning bolus would further increase TG concentrations after the next meal. Six healthy subjects consumed carbohydrate boluses of sugar (85 g each) in random order, followed by a standard lunch 4 hours later: either a control dose of glucose (100%) or a 50:50 or 25:75 (wt:wt) glucose:fructose mixture. Post-meal lipogenesis increased in proportion to the fructose concentration of the beverage: from 7.8% for the 100-g glucose beverage to 15.9% after a mixture of 50 g glucose:50 g fructose and 16.9% after a mixture of 25 g glucose:75 g fructose. Body fat synthesis was measured immediately after the sweet drinks were consumed. The study concluded that fructose has an immediate, acute lipogenic effect, with higher serum TG levels in the morning and after a subsequent meal, even when consumed as a modest amount (50 g or 75 g) in a mixture with glucose. However, it is misleading to suggest that consumption of one specific food or food ingredient is the cause of obesity and the rise of type 2 diabetes. Similar results with high-fructose-sweetened beverages showed an immediate increase in acute 24-hour TG in obese men and women.[92]
On the other hand, the fate of fructose may be oxidation rather than only TG accumulation. Using an oral fructose load of 0.5 or 1 g/kg (diluted in water), Delarue et al[93] reported that 56% or 59% of the fructose load was oxidized over the 6-h study period. Again, a very high dose of fructose was used to examine this pathway.
The studies cited above used high amounts of fructose, with or without labeled fructose, to induce hypertriglyceridemia in an acute setting and evaluate the underlying mechanisms. These studies cannot support conclusions implicating moderate amounts of fructose in the obesity epidemic.
Chronic Studies in Humans
Swarbrick et al[94] evaluated the metabolic effect of 10 weeks' consumption of fructose-sweetened beverages (25% of total carbohydrates). The authors demonstrated that consumption of fructose-sweetened beverages increased postprandial TG and fasting apoB concentrations, and suggested that long-term consumption of diets high in fructose could increase the risk of cardiovascular disease. Nevertheless, this conclusion was drawn from a study of only 7 overweight or obese postmenopausal women, a group with particular metabolic characteristics and a particular pattern of adiposity. The study's main limitation is the substantial variation in postprandial TG (see Figure 2A): the reported SEMs are large, implying high and overlapping SD values. Moreover, this single-group study of fructose-sweetened beverages lacked a comparison group consuming sucrose-sweetened beverages.
Figure 2.
Postprandial TG responses to fructose- and glucose-sweetened beverage consumption. A. Changes in the area under the curve over 14-h sampling periods before and after 2 and 10 weeks of consuming fructose-sweetened beverages at 25% of daily energy in 7 overweight or obese postmenopausal women; values are means ± SE, *p < 0.05 vs 0 wk (figure adapted from Swarbrick et al[94]). B. Mean 24-hour TG and C. TG AUCs (23 h) before and after 2, 8, and 10 weeks' consumption of glucose- or fructose-sweetened beverages at 25% of daily energy intake in overweight/obese humans (G = glucose group, n = 14; F = fructose group, n = 17); values are means ± SEM, *p < 0.05 vs 0 wk in the fructose group (figures adapted from Stanhope et al[23]).
Later, the same group,[23] using a similar protocol in overweight/obese subjects (16 men and 16 women), compared the effects of glucose- and fructose-sweetened beverages providing 25% of energy requirements for 10 weeks on visceral adiposity, plasma lipids, and insulin sensitivity. These subjects obtained 25% of their carbohydrate intake from the sweetened beverages and 30% from complex carbohydrates, meaning that fructose or glucose represented half of the carbohydrates provided; as the study notes, this exceeds 15.8%, the current estimate of mean total added-sugar intake by Americans.[95] The authors evaluated the effect of the sweetened beverages on an ad libitum diet, meaning that subjects could eat as much as they wanted without any special recommendation or counseling concerning food intake. As expected, both groups exhibited significant increases in body weight, fat mass, and waist circumference, with no difference between the two groups. The authors reported that visceral adipose volume increased significantly only in subjects consuming the fructose-sweetened beverages. However, it is not clear how total visceral adipose tissue was measured: the authors state that they performed a CT scan at the level of the umbilicus, that is, a single cross-section at one level. Moreover, even dual-energy x-ray absorptiometry (DEXA) cannot estimate visceral or subcutaneous adipose tissue precisely. It is therefore misleading to claim from such a study that visceral fat is increased by fructose-sweetened beverages. On the other hand, it is not surprising that high amounts of fructose might induce postprandial hypertriglyceridemia and increase fasting LDL and apoB. A limitation of this study is the large variation in the SEMs presented (Figures 2B and 2C). In addition, while fructose consumption did increase the 23-hour postprandial TG AUC and the mean 24-h TG compared with values before fructose consumption, there was no significant difference between glucose- and fructose-sweetened beverage consumption (Figures 2B and 2C).
Havel et al[92] later demonstrated that the increase in 24-hour TG excursions (area under the curve) after fructose beverages depends mainly on the obese subjects' degree of insulin resistance.
Recently, Lê et al[96] found that a 7-day hypercaloric, high-fructose diet (3.5 g fructose/kg/day, +35% energy intake) increased ectopic lipid deposition in liver and muscle and fasting VLDL-TG, as could be expected with such high amounts. The alteration in plasma lipids was more pronounced in healthy offspring of patients with type 2 diabetes, who may be more susceptible to developing lipid alterations under high fructose intake. This agrees with the same group's finding in 7 healthy men[22] that four weeks of a high-fructose diet containing 1.5 g fructose/kg body weight/day increased plasma TG without causing liver or muscle lipid deposition or insulin resistance in these healthy subjects.
One of the effects of fructose intake is suppression of plasma free fatty acids, which suggests inhibition of adipose tissue lipolysis.[97] This has been confirmed in isolated rat adipocytes,[69] and the same effect has been shown in healthy subjects after 7 days on a high-fructose diet.[98]
In humans, in acute as well as chronic studies, high fructose feeding (>15% of energy, more than 50 g/day) elevates daylong serum triglycerides in healthy subjects,[17,99–103] diabetic patients,[104] and overweight/obese subjects.[23,105] Evidence exists that the elevated postprandial triglyceride levels, as well as lipid deposition in liver and muscle, depend on the subjects' insulin-resistance status.
Epidemiological Studies
In a longitudinal study, Fung et al[106] found that women who drink two or more servings of sweetened beverages per day may increase their risk of heart disease by 35%. The study evaluated data from 88,520 women, 34 to 59 years old, participating in the Nurses' Health Study who were free of coronary heart disease and diabetes at baseline in 1980. Seven food-frequency questionnaires administered between 1980 and 2002 were used to evaluate dietary habits. The study considered all sweetened beverages, but the authors implicated fructose, since it had been the major sweetener in the sugar-sweetened beverages. In any case, observational data of this kind cannot establish causality.
While most studies have been conducted in adults, few have been done in children.
Studying normal-weight and overweight 6- to 14-year-old Swiss children, Aeberli et al[4] aimed to determine whether LDL particle size is associated with dietary factors, especially fructose intake. The study was cross-sectional, not interventional, in 74 children, and dietary intakes were estimated using two 24-h recalls and a one-day dietary record. Although there were no significant differences in total fructose intake, the authors concluded that, after adjusting for adiposity, fructose intake was a significant predictor of LDL particle size, which was significantly smaller in the overweight children than in the normal-weight ones. On closer examination, however, the LDL particle size values (Figure 3), although described as statistically different, could hardly have significant clinical impact: the reduction between the two groups was only 1.7%, with overlapping values (large SDs). This study gave quite a negative image of fructose and reopened the debate on whether fructose consumption itself is a health risk. Again, it must be noted that this was a cross-sectional study and that the main outcome is based on dietary recalls or dietary records. Dietary recalls, even when validated, cannot give precise results, particularly in children, whose ability to record or remember their diet is limited.[107,108] In this study there was no association between fructose consumption and HDL, LDL, total cholesterol, or triacylglycerol, and the study failed to demonstrate an increase in total fructose intake in the overweight children. The authors did report that overweight children consumed significantly less fructose, as a percentage of total fructose, from fruits and vegetables and more from sweetened drinks and sweets. This is somewhat misleading, because the absolute amounts of fructose intake from fruits and vegetables or from sweet drinks did not differ significantly between the two groups. In addition, the correlation between LDL size and total fructose intake was poor (r = −0.245); such a weak correlation cannot confirm a causal relationship. In a debate entitled "Fructose: Sweet or Bitter for Diabetes" at the 26th Symposium of the Diabetes and Nutrition Study Group (DNSG, 2008, Varna, Bulgaria) of the EASD, the author (Dr Isabelle Aeberli) acknowledged that the problem with fructose is due mainly to the amount consumed and not to fructose itself. Moreover, the generation of small triacylglycerol-rich lipoprotein particles, such as those generated by fructose, does not itself seem to be a sufficient condition for atherogenesis.[109]
Figure 3.
LDL particle size in 6- to 14-year-old Swiss children; values are means ± SD (figure adapted from Aeberli et al[4]).
Meta-Analyses
In a recent meta-analysis, Livesey and Taylor[40] examined 60 studies on the link between fructose intake and fasting plasma TG and 25 studies on the effect of fructose on postprandial plasma TG in humans. The meta-analysis included different types of subjects: healthy; with impaired fasting glucose, impaired glucose tolerance, or type 2 diabetes; with elevated risk of coronary heart disease; and with any form of hyperlipidemia. The authors found that fructose intake <50 g/d had no significant effect on postprandial triacylglycerol and that ≤100 g/d had no significant effect on fasting levels but was associated with increased postprandial TG excursions. Consumption of 50 g fructose per day for up to 2 years is without effect on fasting plasma triacylglycerol in healthy individuals.[110] At daily fructose doses >100 g, the effect on fasting triacylglycerol depended on whether sucrose or starch was being exchanged for fructose; this effect was dose-dependent and diminished with increasing duration of treatment. Different health states and sources of bias were examined and showed no significant departure from the general trend.
In another meta-analysis, a Canadian group evaluated the effects of isocaloric exchange of fructose for other carbohydrates on triglycerides in people with diabetes,[111] selecting 14 papers that met their criteria out of a total of 725. Overall, the isocaloric exchange of fructose for other carbohydrates had no significant effect on TG, although heterogeneity was strong. In a further analysis separating patients with type 2 diabetes from those with type 1 diabetes, fructose increased triglycerides in type 2 but not type 1 diabetes. This effect was detectable when high doses of fructose (>65 g/d) were taken short-term (≤4 weeks) and when fructose substituted for starch[112,113] but not sucrose.[114–116] Moderate fructose consumption (<50 g/d, or ~10% of metabolizable energy intake) has previously been considered acceptable in diabetics.[109,117,118]
Therefore, less than 50 g/day of added fructose has no deleterious effect on either fasting or postprandial triglycerides.
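To keep the dose thresholds from the two meta-analyses straight, here is a small illustrative sketch in Python; the cutoffs are those cited above, but the function itself is a hypothetical summary, not clinical guidance:

```python
def fructose_tg_effect(grams_per_day: float) -> str:
    """Map daily added-fructose intake to the triglyceride effects
    reported in the meta-analyses cited above (illustrative only)."""
    if grams_per_day < 50:
        return "no significant effect on postprandial or fasting TG"
    if grams_per_day <= 100:
        return "no effect on fasting TG; postprandial excursions may increase"
    return "fasting TG may rise, depending on the carbohydrate exchanged"

print(fructose_tg_effect(40))   # typical moderate intake
print(fructose_tg_effect(120))  # high intake
```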
Fructose and Insulin Resistance
Evidence from Experimental Studies in Animals
There is much evidence in animal models supporting the notion that fructose, when consumed in high amounts, contributes to hepatic and peripheral insulin resistance.[70,71,119,120] In rats fed a fructose-rich diet, Thorburn et al,[120] using the hyperinsulinemic euglycemic clamp method, demonstrated lower insulin-stimulated glucose uptake in hindlimb muscles and adipose tissue than in rats fed a dextrose-rich diet. A decrease in skeletal-muscle and hepatic insulin receptor number, determined by an in situ autoradiography technique, as well as a decrease in insulin receptor gene expression, was found after 2 weeks of 66% fructose feeding in rats.[121] Moreover, decreased insulin-induced insulin receptor phosphorylation was demonstrated in the liver of fructose-fed rats.[122] Similarly, a 57% fructose diet decreased insulin-stimulated glucose incorporation into lipids but increased the antilipolytic action of insulin in isolated adipocytes of normal rats.[13,69]
Three weeks of a 10% fructose-rich diet[123] induced adaptive changes in rat islets: decreased β-cell mass with increased apoptotic cells, increased glucose-induced insulin release and islet glucose metabolism, and increased glucokinase, but not hexokinase, activity. These modifications resulted in increased insulin release in spite of the marked β-cell mass reduction, leading to hyperinsulinemia, impaired glucose tolerance, and insulin resistance.
Here again, high-fructose-fed rats were used as a model of insulin resistance to evaluate islet adaptive changes in such situations (people at risk of developing type 2 diabetes). Recently, the group of Havel[124] demonstrated that 4 months of sustained fructose consumption (20% of energy) accelerated the onset of type 2 diabetes in a polygenic obese type 2 diabetic rat model. The presence of an antioxidant with insulin-sensitizing activity ameliorated the effect of fructose by improving glucose homeostasis, likely by preserving β-cell function.
Moreover, fructose feeding produced a defect in the neural insulin signaling pathway in the brain: decreased insulin-stimulated tyrosine phosphorylation of insulin receptors and of insulin receptor substrate 1 (IRS-1) was demonstrated in fructose-fed hamsters.[125] Insulin-mediated phosphorylation of residues necessary for activation of another key effector of insulin signaling was also markedly decreased.
Nevertheless, the high-fructose-fed rat is widely used as a dietary model of insulin resistance.[15,126,127] In rodents, therefore, there is no doubt that high-fructose feeding causes insulin resistance.
Acute Studies in Humans
In humans, hardly any direct evidence exists to confirm negative effects of fructose on insulin sensitivity. Fructose has been considered a therapeutic tool in the diet of diabetic patients because of its low glycemic index[128] and because its initial metabolic steps do not require insulin.[79] It elicits an increase in energy expenditure that has been suggested to be beneficial for obese subjects with or without diabetes.[97,129] The effect of fructose on hepatic insulin sensitivity under conditions of moderate hyperglycemia has been studied during a hyperglycemic clamp with or without infusion of 16.7 μmol/kg/min fructose.[130] The acute fructose infusion induced both extrahepatic and hepatic insulin resistance, suggested to be secondary to increased intrahepatic glucose-6-phosphate synthesis. These results raise the question of whether fructose ingested as part of the diet has the same effects.
Chronic Studies in Humans
Consuming an extra 1000 kcal as fructose, a high amount, for one week reduced both insulin binding and insulin sensitivity compared with the same amount of glucose in young healthy subjects.[131] In a special case, fructose as the sole carbohydrate source in a very-low-calorie diet (600 kcal) delayed by two weeks the improvements in plasma glucose and insulin levels, as well as insulin binding, expected from a low-calorie diet.[132]
Moderate fructose intake (one-third of carbohydrate intake) in healthy subjects for 2 weeks, however, had no deleterious effect on insulin sensitivity compared with the same amount of sucrose.[133,134]
In healthy subjects, consuming up to 1.5 g fructose/kg body weight per day for 4 weeks increased plasma triglycerides without inducing insulin resistance.[135] The authors were able, however, to detect early molecular alterations in only two skeletal muscle genes, and suggested that these alterations could later induce whole-body insulin resistance.[135] The same group showed that fructose overfeeding (3.5 g fructose/kg fat-free mass/day, again a high dose) for 6 days produces hepatic insulin resistance, whereas these effects are markedly blunted in healthy young men.[136]
In diabetic subjects, other chronic studies could not detect any deleterious effects of moderate fructose intakes: 30 g fructose/day compared with starch as part of a 1400- to 1600-kcal diet for 8 weeks[112] or one year,[137] or 60 g fructose/day for 12 weeks[138] or 6 months.[139]
A high amount of fructose, however, given as fructose-sweetened beverages at 25% of energy requirements for 10 weeks, led to increased fasting plasma glucose and insulin levels and decreased insulin sensitivity compared with the same amount of glucose-sweetened beverages.[23]
Epidemiological Studies
In a large cross-sectional analysis of the Nurses' Health Study I and II, an association was found between high fructose intake and high C-peptide concentrations.[140] On the basis of this association, the authors suggested that fructose intake may play a role in the development of insulin resistance and type 2 diabetes. However, a causal relationship cannot be identified from this study design.
In a longitudinal study, Janket et al[141] evaluated the relationship between the risk of type 2 diabetes and intakes of total caloric sweeteners, sucrose, fructose, glucose, and lactose in a cohort of 38,480 female health professionals. Neither fructose, glucose, nor sucrose was related to the risk of developing type 2 diabetes; no difference could be detected between the different sugars.
While some investigators detected deleterious effects with high doses, or none with moderate doses, others have found beneficial effects. Koivisto et al[113] demonstrated that substituting moderate amounts of fructose (45–65 g/day; 20% of carbohydrate calories) for complex carbohydrates for 4 weeks improved insulin sensitivity in type 2 diabetic patients. Similarly, Reiser et al[102] found that patients adapted to 20% of energy as fructose for 5 weeks had improved plasma glucose responses to a glucose challenge compared with a group adapted to a starch diet. In a group of children with diabetes, 1 g fructose/kg/day (30 g/day maximum) with guar gum for three weeks decreased HbA1c but increased glucosuria.[142]
In small doses, dietary fructose appears to be beneficial in enhancing glucose tolerance.[143,144] The addition of small doses of fructose to a glucose meal can enhance hepatic glucose disposal, and the addition of small amounts of fructose to orally ingested glucose increases hepatic glycogen synthesis and reduces glycemic responses in subjects with type 2 diabetes.[145] This effect is due to a rise in fructose-1-phosphate, which indirectly affects hepatic glucose metabolism by modulating the activity of glucokinase, a key regulatory enzyme required for the formation of glucose-6-phosphate; glucokinase is also involved in the inhibition of hepatic glucose release by portal hyperglycemia.[146] At low levels, fructose-1-phosphate antagonizes a glucokinase regulatory protein, thereby enhancing glucokinase activity. Stimulation of hepatic glycogen synthesis by this mechanism may be of potential therapeutic value; high doses, however, could be deleterious.
Recently, a meta-analysis[40] demonstrated that fructose intakes from 0 up to 90 g/d have a beneficial effect on HbA1c. This meta-analysis covered studies in healthy, glucose-intolerant, and type 2 diabetic subjects. The authors note, however, that 50 to 100 g is a high fructose intake that could affect postprandial triglycerides, and whether the lowering or maintenance of a low HbA1c at these doses would persist is unknown. We can conclude that moderate fructose consumption (<50 g/d, or ~10% of metabolizable energy) appears acceptable and potentially beneficial.
Fructose Ingestion Acutely Elevates Blood Pressure
Brown and co-workers[147] recently showed that acute ingestion of glucose and fructose drinks (60 g) elicits distinct hemodynamic responses. Fructose, in particular, elicits an increase in blood pressure that is probably mediated by an increase in cardiac output without compensatory peripheral vasodilatation.
While fructose-induced hypertension is well demonstrated in rodents via various mechanisms,[148] long-term demonstrations in humans have failed. In the Nurses' Health Study, fructose intake was not associated with the risk of developing hypertension.[149] Moreover, in a recent chronic study using a high fructose amount (1.5 g/kg body weight per day for 4 weeks), there was no significant change in mean blood pressure at the end of the 4-week fructose diet.[136] There is thus no existing evidence of a long-term relationship between fructose and hypertension in humans.
Fructose Consumption and the Risk of Gout in Humans
Prospective data have suggested that consumption of sugar-sweetened soft drinks and fructose is strongly associated with an increased risk of gout in men.[150] The authors concluded that other contributors to fructose intake, such as total fruit juice or fructose-rich fruits (apples and oranges), were also associated with a high risk of gout. In these studies, information on intake of soft drinks and fructose was obtained through validated food-frequency questionnaires; such studies cannot confirm a cause-and-effect relationship. When 5 weeks of fructose consumption was compared with 5 weeks of starch consumption (20% of energy), serum uric acid increased with fructose intake.[102] The authors compared a simple sugar with a complex one, so these findings could simply reflect the effect of a refined sugar. This hypothesis is likely, because when 24% of carbohydrates consumed as fructose was compared with the same amount consumed as sucrose, no alteration in uric acid level was detected.[151] On the other hand, when a high fructose amount (250–290 g/d) was taken for 12 days, an increase in both plasma and urinary uric acid was found.[152] Others believe that fructose-induced hyperuricemia occurs mainly in gouty patients.[153]
Fructose and Exercise
Substrate utilization during exercise with ingestion of glucose versus glucose plus fructose has been an important focus of study. In contrast to glucose, exogenous fructose is absorbed more slowly in the intestine during exercise,[154] lowering its rate of oxidation,[155,156] possibly as a result of this slower absorption and the necessity of its conversion to glucose by the liver before oxidation.[156] The combination of fructose and glucose, however, is well absorbed during exercise[157] and may support higher oxidation rates than either sugar ingested separately.[158] Ingestion of glucose alone and of glucose plus fructose delays exhaustion at 90% peak power by 25% and 40%, respectively, after 90 minutes of moderate-intensity exercise.[159] While pre-exercise and during-exercise ingestion of glucose and fructose are of equal value in delaying exhaustion, ingestion of fructose before and during exercise provides a more constant supply of available glucose to the working muscle.[160]
Other Beneficial Effects
Dietary fructose (20% of calories from fructose) enhances mineral balance.[161] In addition, intake of 250 mL of a fructose-rich drink after alcohol consumption decreases plasma alcohol levels by 10%.[162]
Conclusions
Certainly, high fructose consumption can induce insulin resistance, impaired glucose tolerance, hyperinsulinemia, hypertriglyceridemia, and hypertension in animal models; there is no evidence of similar effects in humans at realistic consumption patterns. Although existing data on the metabolic and endocrine effects of dietary fructose suggest that increased consumption may be detrimental to body weight, adiposity, and the metabolic indexes associated with the insulin resistance syndrome, much more research is needed to fully understand the metabolic effects of dietary fructose in humans. Despite the epidemiological parallel between the marked increases in obesity and in fructose consumption, there is no direct evidence linking obesity to the consumption of physiological amounts of fructose in humans (≤100 g/day). A moderate dose (≤50 g/day) of added fructose has no deleterious effect on fasting and postprandial triglycerides, glucose control, or insulin resistance. There is no existing evidence of a relationship between moderate fructose consumption and hypertension. Fructose may induce hyperuricemia, but mainly in patients with gout.
Beneficial effects of moderate amounts of fructose have also been demonstrated: (1) fructose seems to decrease appetite when taken in a solution or in puddings before a meal; (2) when added to a glucose challenge, it seems to lower plasma glucose responses to orally ingested glucose via stimulation of hepatic glycogen synthesis; and (3) while pre-exercise and during-exercise ingestion of glucose and fructose are of equal value in delaying exhaustion, ingestion of fructose before and during exercise provides a more constant supply of available glucose to the working muscle.
Two new reviews published during the revision of this manuscript strengthen our conclusions. The first is an evidence-based review[163] indicating that fructose does not cause biologically relevant changes in TG or body weight when consumed at levels approaching the 95th-percentile estimates of intake; this review is based on recent guidance developed by the US Food and Drug Administration (FDA).[164] The second review, by Tappy and Lê,[37] concluded that (1) there is no unequivocal evidence that fructose intake at moderate doses is directly related to adverse events in humans, and (2) there is no direct evidence that high-fructose corn syrup has more serious metabolic consequences than sucrose.
The health implications of the balance of fructose's effects on different aspects of metabolism will need to be ascertained in more direct, long-term intervention studies.
Exploring the Link between Blood Pressure and Lifestyle
J. Paige High Carson, PharmD, CDE, BCPS
Posted: 04/06/2010; US Pharmacist. 2010;35(2):1-4. © 2010 Jobson Publishing
Abstract and Introduction
Introduction
Nearly 72 million people in the United States have hypertension (HTN); one out of three American adults has HTN, and one-third of people with HTN are unaware they have high blood pressure (BP), which is why HTN is often referred to as "the silent killer."[1] Hypertension is defined as a BP ≥140/90 millimeters of mercury (mmHg). As BP rises, risk increases for heart failure, myocardial infarction, kidney disease, and stroke. For each 20 mmHg increase in systolic blood pressure (SBP) or 10 mmHg increase in diastolic blood pressure (DBP) above 115/75 mmHg, the risk of cardiovascular disease doubles (illustrated in the sketch below).[2] A recent study conducted in nondiabetic patients supports treating to a target SBP <130 mmHg versus a target SBP <140 mmHg: the group achieving the lower SBP experienced significantly less development of left ventricular hypertrophy and fewer cardiovascular events than the group treated to the usual SBP goal.[3] The current Joint National Committee on Prevention, Detection, Evaluation, and Treatment of High Blood Pressure (JNC 7) classification and treatment of BP for adults is given in TABLE 1.
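To make the doubling rule concrete, here is a minimal sketch in Python; the function and its scaling (multiplicative doubling per 20/10-mmHg increment) are illustrative assumptions derived from the rule as stated, not part of JNC 7:

```python
def relative_cv_risk(sbp: float, dbp: float) -> float:
    """Illustrative relative cardiovascular risk versus a 115/75 mmHg
    baseline, assuming risk doubles for each 20 mmHg of SBP or
    10 mmHg of DBP above that baseline (whichever is greater)."""
    sbp_doublings = max(sbp - 115, 0) / 20
    dbp_doublings = max(dbp - 75, 0) / 10
    return 2 ** max(sbp_doublings, dbp_doublings)

print(relative_cv_risk(135, 85))  # 2.0 -> about twice baseline risk
print(relative_cv_risk(155, 95))  # 4.0 -> about four times baseline risk
```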
Various lifestyle risk factors that elevate blood pressure and lead to HTN have been identified. Many have been well documented in the literature, and others have recently been postulated to affect BP on the basis of recent trials or new research awaiting publication ( TABLE 2 ). A healthy lifestyle is essential to preventing HTN and managing it successfully. Lifestyle modifications should be incorporated into every treatment regimen for prehypertension and HTN ( TABLE 3 ). Implementation of a healthy lifestyle decreases BP, reduces cardiovascular disease risk, and increases the efficacy of antihypertensive medications.[2]
Conventional Risk Factors for Developing Hypertension
Hypertension can develop because of a person's lifestyle, medication regimen, underlying health conditions, genetic history, or a combination of these factors. Nonmodifiable risk factors include advancing age, race, family history of HTN or premature heart disease, and other concurrent health conditions. Some of these health conditions include adrenal tumors, chronic kidney disease, congenital heart defects, diabetes, thyroid disorders, pheochromocytoma, and pregnancy. Hypertension is more common in African Americans and appears to develop at an earlier age in this population. Medications that may cause HTN include caffeine, chronic steroid therapy, oral contraceptives, nonsteroidal anti-inflammatory drugs (NSAIDs), cyclooxygenase-2 (COX-2) inhibitors, amphetamines and other stimulant drugs, cocaine, decongestants, weight loss drugs, cyclosporine and other immunosuppressants, erythropoietin, and OTC supplements (e.g., ephedra, licorice, ma huang).[2]
Established Lifestyle Risk Factors for Developing Hypertension
There are many modifiable risk factors for HTN, and the list seems to grow steadily with ongoing research. Cigarette smoking is the single most common avoidable cause of cardiovascular death in the world.[4] Data from the CDC show that 21% of adults (18 years of age and older) in the U.S. currently smoke cigarettes.[5] Those who smoke 15 or more cigarettes per day have a higher incidence of HTN. Smoking transiently raises BP and heart rate by increasing sympathetic nerve activity and myocardial oxygen consumption. Chronically, tobacco chemicals damage the lining of the arterial walls of the heart, resulting in artery stiffness and narrowing that can persist for 10 years after smoking cessation. Smoking also accelerates the progression of renal insufficiency and increases the risk of other cardiovascular complications.[4,6]
Obesity is estimated to be the leading cause of preventable illness in the U.S. Greater than two-thirds of HTN prevalence can be attributed to obesity.[7] The National Heart, Lung, and Blood Institute (NHLBI) defines obesity as having a body mass index (BMI) ≥30 kg/m2.[2] Results from the National Health and Nutrition Examination Survey (NHANES, 2005–2006) indicate that 34.3% of the U.S. adult population is obese.[8] Obesity is most pronounced in the southeast region of the country. Overweight prevalence among children and adolescents also remains high in the U.S., with 10% of U.S. children classified as overweight or obese.[7,8] Abdominal adiposity, in particular, is linked to congestive heart failure, coronary artery disease, diabetes, sleep apnea, and stroke. Being overweight requires that more blood be supplied to oxygenate heart tissues, and as the circulated blood volume increases through the blood vessels, the pressure increases on the artery walls.[6,7]
Besides obesity, lack of physical activity and a sedentary lifestyle produce an increase in heart rate. An increased heart rate requires the heart to work harder with each contraction, exerting a stronger force on the arteries and thereby raising BP. Physical inactivity has also been linked to more health care office visits, hospitalizations, diabetes, and increased medication burden.[6,9]
Multiple dietary factors increase the risk for HTN. It is well known that excessive sodium intake leads to HTN. A diet high in salt causes the body to retain fluid, and increased water movement raises the pressure within the vessel walls.[6] The majority of the sodium in Western-style diets is derived from processed foods. High-salt diets decrease the effectiveness of antihypertensives in patients with resistant HTN. Resistant HTN is defined as having a BP above one's goal despite using three or more antihypertensive medications concurrently.[10] A high-salt diet can also increase the need for potassium. Potassium balances the amount of sodium within cells. If not enough potassium is consumed or retained, sodium accumulates in the blood. A diet low in potassium (<40 mEq/day) produces sodium accumulation through decreased sodium excretion, thereby leading to HTN. Potassium deficiency also increases the risk for stroke.[6,11]
Excessive alcohol consumption consisting of greater than two drinks per day for men or greater than one drink per day for women leads to sustained BP elevations.[2] Alcohol interferes with blood flow by moving nutrient-rich blood away from the heart.[12] Alcohol can also reduce the effectiveness of antihypertensives. Binge drinking, or having at least four drinks consecutively, may cause significant and rapid increases in BP.[13] Debate exists on whether low-to-moderate alcohol consumption raises or lowers BP.
Emerging Risk Factors for Developing Hypertension
A diet high in sugar, fructose in particular, raises BP in men, according to a recent study presented at the American Heart Association's (AHA) 2009 High Blood Pressure Research Conference.[14] High fructose consumption has also been linked to an increased risk of obesity. Fructose is a dietary sugar that is used in corn syrup and accounts for one-half of the sugar molecules in table sugar. High-fructose corn syrup is often utilized in packaged sweetened products and drinks due to its long shelf life and low cost. In this study, men consuming a high-fructose diet for just 2 weeks experienced an increased incidence of HTN and metabolic syndrome.[14]
Vitamin D deficiency (<80 nmol/L) may increase the risk of developing systolic HTN in premenopausal women years later, according to a study conducted in Caucasian women in Michigan.[15] In this study, presented at the AHA's High Blood Pressure Research Conference, researchers compared BP and vitamin D levels drawn in 1993 to those drawn 15 years later in 2007. Premenopausal women (average age of 38 years) with vitamin D deficiency in 1993 were three times more likely to have HTN in 2007 than those with normal vitamin D levels in 1993.[15]
Sleep deprivation raises SBP and DBP and may lead to HTN. In the recent Coronary Artery Risk Development in Young Adults (CARDIA) sleep study, sleep maintenance and sleep duration were measured in a group of adults aged 35 to 45 years and then repeated 5 years later on the same study population.[16] According to this study, shorter sleep duration and poor sleep quality increase BP levels and lead to HTN. Sleep deprivation may produce an increase in heart rate and sympathetic activity, evolving into HTN.[16]
A connection has been found between HTN and road traffic noise. An Environmental Health study published in 2009 measured the loudness of road noise in decibels at the home addresses of a large number of adults and recorded their incidence of self-reported HTN. A significant association was found between incidence of HTN and residing near a noisy road. Interestingly, a less prominent effect on BP was noted in the elderly than in younger adults; possible explanations offered by the authors include that noise may be harder to detect and may be less of an annoyance in the older population than in younger individuals. The study authors speculate that long-term noise exposure causes an endocrine and sympathetic stress response in a middle-aged adult's vascular system, resulting in HTN and an elevated cardiovascular risk profile.[17]
A questionnaire completed by deployed American servicemen and servicewomen revealed that those reporting multiple exposures to combat had a significantly higher incidence of HTN than those reporting no combat. The elevation in BP is thought to arise from the high stress situation of combat exposure. Combat stress can result in significant physical and psychosocial stress to those deployed.[18]
Lifestyle Modifications for Treatment of Hypertension
Cigarette smoking is a modifiable cardiovascular risk factor that can have profound effects. Smoking cessation can result in immediate improvement in BP and heart rate after just 1 week.[19] A linear relationship has been discovered in improvement in arterial wall stiffness and duration of smoking cessation in ex-smokers. Achievement of a decade of smoking cessation results in remodeling to nonsignificant levels of arterial stiffness.[20] In addition to lowering BP, smoking cessation results in an overall cardiovascular risk reduction and reduction in mortality. Rigorous measures should be utilized to assist individuals in achieving smoking cessation.[2] Smoking cessation should be assessed and discussed at every available opportunity, whether it be inpatient, outpatient, or at the pharmacy. Studies have shown that when patients are told their lung age, they are more likely to quit smoking.[21] Pharmacists possess an enormous opportunity to assist patients in achieving smoking cessation by teaching patients about the various smoking cessation pharmacotherapy options. An explanation of how to properly use the medications (OTC and prescription), differences between them, and what to expect from the medications can improve adherence and the desired outcome of successful smoking cessation.
Weight reduction can have the most profound effect of all lifestyle modifications on lowering BP, leading to an approximate drop in SBP of 5 to 20 mmHg per 10 kg of weight lost (see the sketch below). The JNC 7 guidelines recommend weight reduction to maintain a normal body weight, defined as a BMI between 18.5 and 24.9 kg/m2.[2] The Surgeon General's recommendations published by the U.S. Department of Health and Human Services advise determining a person's BMI and, if the person is overweight or obese, having him or her lose at least 10% of body weight, at a gradual pace of one-half to two pounds per week.[22]
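As a rough illustration of the JNC 7 figures just cited, the sketch below scales the 5- to 20-mmHg-per-10-kg range linearly; the linear scaling and the function itself are assumptions for illustration only:

```python
def sbp_drop_range(kg_lost: float) -> tuple[float, float]:
    """Approximate expected SBP reduction (mmHg) for a given weight loss,
    assuming the JNC 7 estimate of 5-20 mmHg per 10 kg scales linearly."""
    return (5.0 * kg_lost / 10, 20.0 * kg_lost / 10)

low, high = sbp_drop_range(5)  # a 5-kg (roughly 11-lb) weight loss
print(f"Expected SBP drop: {low:.1f}-{high:.1f} mmHg")  # 2.5-10.0 mmHg
```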
Along with weight reduction, regular aerobic physical activity for 30 minutes or more per day most days of the week is recommended and results in an SBP improvement of 4 to 9 mmHg.[2] It is recommended that children be physically active for 60 minutes most days of the week. The Surgeon General recommends limiting television viewing to below 2 hours per day.[22]
The JNC 7 guidelines recommend multiple dietary modifications. The most notable and effective is adoption of the Dietary Approaches to Stop Hypertension (DASH) eating plan, which can lower SBP by 8 to 14 mmHg[2] and is as efficacious as adding a single antihypertensive medication. This plan includes significant consumption of fruits and vegetables rich in potassium, which assists in maintaining an optimal sodium-to-potassium ratio; it is low in saturated fat and includes low-fat dairy products. Sodium restriction is an important component of the DASH diet and is also recommended independently in the JNC 7 guidelines: a reduction in sodium intake to ≤100 mmol/day (6 g NaCl or 2.4 g sodium) can drop SBP by 2 to 8 mmHg. The DASH diet also provides details on how to check labels for sodium content and how to estimate sodium amounts in foods based on how they are cooked or prepared when eating in restaurants.[2,23] The Surgeon General also recommends selecting sensible portions.[22]
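The 100 mmol/day cap converts to the gram figures cited above via the molar masses of sodium (about 23.0 g/mol) and sodium chloride (about 58.4 g/mol); a quick sketch of the arithmetic:

```python
NA_G_PER_MOL = 23.0    # molar mass of elemental sodium
NACL_G_PER_MOL = 58.4  # molar mass of sodium chloride

mmol_per_day = 100  # JNC 7 / DASH sodium cap

# 100 mmol/day of sodium, expressed in grams:
print(f"sodium:  {mmol_per_day * NA_G_PER_MOL / 1000:.1f} g/day")    # 2.3 (cited as 2.4 g)
print(f"as NaCl: {mmol_per_day * NACL_G_PER_MOL / 1000:.1f} g/day")  # 5.8 (cited as 6 g)
```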
Limiting alcohol consumption to no more than two drinks per day for most men and one drink per day for women is recommended by the JNC 7 guidelines. Two drinks is equivalent to 1 oz of ethanol (e.g., vodka, gin), 24 oz of beer, 3 oz of 80-proof whiskey, or 10 oz of wine. This decrease in alcohol intake can lower SBP by 2 to 4 mmHg.[2]
Plausible Lifestyle Modifications for Treatment of Hypertension
Lowering fructose intake through limiting consumption of sweetened products could prevent rises in BP and development of metabolic syndrome. Reducing intake of sweetened drinks or processed foods that contain high-fructose corn syrup and lessening use of regular table sugar will lower intake of fructose.[14]
Vitamin D deficiency is widespread among women. It is speculated by some researchers that many women do not receive adequate sun exposure, obtain enough vitamin D in their diet, or supplement with enough vitamin D. The current recommended intake of vitamin D for this population is 400 to 600 IUs per day, though some researchers suggest a higher intake of daily vitamin D. Knowing one's vitamin D level and obtaining adequate vitamin D through diet and/or supplementation may prevent HTN.[15]
A randomized, controlled trial published in 2007 demonstrated that regular consumption of a small amount of dark chocolate mildly reduces BP (average -2.9 mmHg systolic and -1.9 mmHg diastolic) in people with stage 1 HTN or prehypertension. The study population had no other cardiovascular risk factors and was not taking antihypertensive medications. The study compared daily intake (30 kcal, or the equivalent of a Hershey's Kiss) of dark chocolate versus white chocolate for 18 weeks; the group receiving white chocolate had no improvement in BP. The polyphenols in dark chocolate are suspected of lowering BP.[24]
A recent study explored the effects of various milk and cheese products on developing HTN in adults aged 55 years and older living in the Netherlands. It was discovered after 6 years that higher dairy intake was associated with lower rates of HTN. The authors concluded that consumption of low-fat dairy products may prevent HTN in older individuals.[25] Another study conducted in U.S. women aged 45 years and older showed similar results with intake of low-fat dairy products, but not with supplements of calcium or vitamin D.[26]
Lastly, various studies have shown that ownership of a dog or cat lowers a person's BP. Whether this is accomplished through increased exercise or the psychological effects of a human-animal connection is yet to be fully established. Health benefits of pet ownership include BP reductions, a reduction in triglyceride levels, improved exercise habits, decreased feelings of loneliness, and decreased stress levels.[27,28]
Conclusion
A person's way of life can have substantial effects on his or her health, including the risk of developing HTN. Numerous lifestyle risk factors have been implicated in the development of HTN; likewise, several lifestyle modifications effectively lower BP. Alterations in lifestyle are essential to prevention and treatment of HTN and can decrease the need for one or more prescription medications. Lifestyle changes to lower BP can additionally correct obesity, lower cardiovascular risk, decrease insulin resistance, improve drug efficacy, and enhance antihypertensive effect. Greater BP reductions are achieved if two or more lifestyle adjustments are made concurrently. Assisting and motivating patients to make lifestyle changes to lower their BP to goal levels is recommended by the JNC 7 guidelines yet is often underutilized by health care clinicians. It is imperative that pharmacists be knowledgeable in risk factors and treatments for HTN and express interest in having patients reach their BP goals. Studies have proven that involvement of a pharmacist in the treatment of hypertensive patients can result in improved BP control through adoption of lifestyle modifications, proper antihypertensive selection, and better adherence to medications.[2,29]
New Lipid Guidelines Recommend Tighter Control: Management of Hypercholesterolemia
Sandra L. Chase, BS, PharmD, FAPP
Posted: 07/30/2002; Topics in Advanced Practice Nursing eJournal. 2002;2(3) © 2002 Medscape
Abstract and Introduction
Abstract
Coronary heart disease (CHD) is a leading cause of morbidity and mortality, and high blood cholesterol is a major risk factor for CHD. In the third report of the National Cholesterol Education Program's Expert Panel on Detection, Evaluation, and Treatment of High Blood Cholesterol in Adults, the guidelines for screening and management of high blood cholesterol have been updated to further identify and treat patients at risk. Therapeutic lifestyle changes are stressed as therapy for all patients. Pharmacologic therapy is indicated for all people not meeting low-density lipoprotein target goals. Although 3-hydroxy-3-methylglutaryl coenzyme A reductase inhibitors are well tolerated and the most frequently used hypolipidemic agents, a variety of agents can be used, including nicotinic acid derivatives, bile acid sequestrants, and fibric acid derivatives.
Introduction
Coronary heart disease (CHD) includes the clinical conditions of acute myocardial infarction, angina pectoris, and heart failure. It is estimated that CHD affects 12.2 million Americans.[1] In addition, CHD causes more than 466,000 deaths annually in the United States.[2] Approximately 1.1 million Americans had a myocardial infarction in the year 2000, and more than 40% died as a result. Sixteen percent of men and 35% of women will experience a second myocardial infarction within 6 years of the first.[1]
The economic impact of CHD is enormous. In 1999, the direct costs of CHD (costs for hospitalization, nursing home care, physician services, medications, home healthcare) in the United States amounted to $55.2 billion; the indirect costs for lost productivity, morbidity, and mortality were $118.2 billion. Although 88% of persons younger than 65 years are able to return to work after a myocardial infarction, CHD is the leading cause of early, permanent disability in the US workforce.[1] It accounts for 19% of disability allowances paid by the Social Security Administration.[1]
High blood levels of cholesterol (particularly low-density lipoprotein cholesterol [LDL-C]) increase the risk of CHD, and lowering total cholesterol and LDL-C levels reduces this risk.[2] Clinical management of persons without CHD (eg, interventions to prevent the development of or reduce risk factors for CHD) is referred to as primary prevention; treatment of elevated LDL-C levels in patients with a history of CHD (or other atherosclerotic disease associated with lipid accumulation in the blood vessel walls) is considered secondary prevention.[2,3] Thus, it is of critical importance that our patients with lipid disorders be identified and treated appropriately and aggressively to reduce their risk of CHD. Table 1 outlines the risk factors for CHD.
Pathophysiology
Cholesterol Metabolism
Cholesterol is an essential component of cell membranes and a metabolic precursor of bile acids and steroid hormones (eg, adrenocortical and sex hormones).[4,5] It is obtained from the diet and synthesized in the liver, intestinal mucosa, and other cells. The rate-limiting step in cholesterol synthesis involves the enzyme 3-hydroxy-3-methylglutaryl coenzyme A (HMG-CoA) reductase, which converts HMG-CoA to mevalonate.[6] Cholesterol and other lipids (eg, triglycerides, which are made up of free fatty acids and glycerol) are transported in the systemic circulation as a component of lipoproteins.
Lipoproteins are particles composed of (1) a hydrophobic lipid core made up of cholesterol esters and triglycerides and (2) a hydrophilic outer coat made up of phospholipids, free cholesterol, and apolipoproteins.[5,7] Apolipoproteins are proteins that provide structural stability to lipoproteins, bind with cell receptors, and play a vital role in regulating lipid transport and lipoprotein metabolism.[5,7]
Lipoproteins are classified on the basis of their density as chylomicrons, very low-density lipoproteins (VLDLs), LDLs, intermediate-density lipoproteins (IDLs), and high-density lipoproteins (HDLs). Most of the cholesterol in the serum (60%-70%) is found in LDL particles; HDL particles contain 20% to 30% of the total serum cholesterol; and VLDL particles contain 10% to 15% (as well as most of the triglycerides during fasting conditions).[2] Chylomicrons transport cholesterol and fatty acids from the intestines (ie, dietary cholesterol and that synthesized locally in the mucosa) to the liver.[5]
In the liver, cholesterol and triglycerides are synthesized and incorporated into VLDL particles, which deliver cholesterol to the peripheral tissues when the particles are released into the bloodstream. The triglyceride content of VLDL particles initially is high and decreases progressively as the result of enzyme activity in the bloodstream. This enzyme activity converts the particles sequentially to VLDL remnants, IDL, and LDL.
The LDL particles are small and high in cholesterol content.[5] The LDL receptors on peripheral and hepatic cells bind with apolipoproteins on the surfaces of LDL, resulting in uptake of cholesterol into the cells (ie, clearance of LDL from the bloodstream), where it is subsequently degraded.[5,6] Low intracellular cholesterol concentrations stimulate the synthesis of LDL receptors, thereby increasing cellular uptake of LDL.[5]
The HDL particles transport cholesterol from peripheral cells to the liver, a process known as reverse cholesterol transport.[5] High HDL levels promote clearance of cholesterol from peripheral tissues.
Lipoprotein(a) (Lp[a]) is similar to LDL, but it contains apolipoprotein(a), a protein similar to plasminogen; Lp(a) is formed when apolipoprotein(a) binds to apolipoprotein B on LDL.[5,7] Apolipoprotein B is the only apolipoprotein on LDL particles, whereas other lipoproteins have multiple apolipoproteins on their surfaces.[5] High levels of LDL-C, triglycerides, apolipoprotein B, and Lp(a) and low levels of HDL and apolipoprotein A-I (an apolipoprotein associated with HDL synthesis) are associated with high risk of CHD.[2,5,7]
Hypercholesterolemia
The cholesterol level in blood is determined by a combination of factors, including inheritance (ie, genetic abnormalities in lipoprotein metabolism), age, and acquired factors (eg, lifestyle factors such as dietary intake of saturated fat and cholesterol, physical activity). Secondary causes of hypercholesterolemia and lipoprotein abnormalities include poorly controlled diabetes mellitus, hypothyroidism, nephrotic syndrome, dysproteinemia, obstructive liver disease, drug therapy (eg, cyclosporine, glucocorticoids), and alcoholism.[7]
Atherosclerosis
High blood levels of cholesterol play a leading role in atherosclerotic lesion formation in the walls of coronary arteries. Atherosclerosis begins with accumulation of lipoproteins (primarily LDL) within the inner layer of the arterial wall, where they no longer come in contact with antioxidants and other constituents in the bloodstream.[8] Chemical modification (particularly oxidation) of lipoproteins leads to a local inflammatory reaction involving macrophages, which ingest oxidized lipoproteins and form foam cells. Accumulation of foam cells contributes to fatty lesion formation. Reverse cholesterol transport out of the tissues mediated by HDL may also occur.
Over time, fatty lesions progress to fibrous plaques. Fissures may develop in a plaque, exposing the underlying tissues to platelets and other constituents of blood. Platelet adhesion, activation, and aggregation lead to thrombus (clot) formation, partially or completely occluding the vessel lumen and causing clinical symptoms of CHD (eg, myocardial ischemia or infarction).[8]
Diagnosis and Classification
It is recommended that a complete fasting lipoprotein profile (as opposed to only total cholesterol and HDL-C) be measured in all adults 20 years and older at least once every 5 years.[2] A fasting lipoprotein profile includes total cholesterol, LDL-C, HDL-C, and triglycerides. Measurement of the LDL-C on initial screening provides more information for risk assessment.
Although LDL-C testing is more precise, advantages of testing for total cholesterol alone include greater availability of the test, lower cost, and no requirement that the patient fast beforehand.[9] As a result, it may not be practical in all situations to have a full fasting profile performed. If the sample is nonfasting (fasting is defined as nothing by mouth with caloric value in the preceding 9 to 12 hours), only the total cholesterol and HDL-C values are usable. When a fasting sample is available and the total triglyceride level is 400 mg/dL or less, LDL-C may be calculated using the following (Friedewald) equation[2]: LDL-C = total cholesterol - HDL-C - (triglycerides/5).
In patients with total triglyceride levels higher than 400 mg/dL, LDL-C should be measured directly by preparative ultracentrifugation, because calculating LDL-C from total cholesterol, total triglyceride, and HDL-C values is inaccurate in this situation. In general, LDL-C levels less than 100 mg/dL are optimal; 100 to 129 mg/dL is near optimal; 130 to 159 mg/dL is considered borderline high risk; and 160 mg/dL or greater is high risk.[2]
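To make the arithmetic concrete, here is a minimal sketch of the calculation and cut points just described (the code and function names are illustrative additions, not part of the source article; all values are in mg/dL):

```python
# Illustrative only: Friedewald estimate of LDL-C and the ATP III
# categories quoted in the text. Values are in mg/dL.

def friedewald_ldl(total_chol, hdl_c, triglycerides):
    """Estimate LDL-C; returns None when the formula does not apply."""
    if triglycerides > 400:
        return None  # direct measurement (ultracentrifugation) is required
    return total_chol - hdl_c - triglycerides / 5

def classify_ldl(ldl_c):
    """ATP III LDL-C categories as summarized above."""
    if ldl_c < 100:
        return "optimal"
    if ldl_c < 130:
        return "near optimal"
    if ldl_c < 160:
        return "borderline high risk"
    return "high risk"

# Example: TC 220, HDL-C 50, TG 150 -> LDL-C = 220 - 50 - 30 = 140
ldl = friedewald_ldl(220, 50, 150)
print(ldl, classify_ldl(ldl))  # 140.0 borderline high risk
```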
Cholesterol ratios (total cholesterol/HDL-C and LDL-C/HDL-C) are also strong predictors of CHD.[9] Other measures that are not performed routinely but may provide insight into a patient's risk for CHD include Lp(a) and apolipoproteins B and A-I.
Hypercholesterolemia may be isolated or accompanied by hypertriglyceridemia. If the triglyceride value is greater than or equal to 200 mg/dL, the non-HDL-C level should be assessed: non-HDL-C = total cholesterol - HDL. Identified as a secondary target of therapy by the National Cholesterol Education Program (NCEP) Adult Treatment Panel III (ATP III) in patients with high triglyceride levels, non-HDL levels reflect the sum of LDL and VLDL (triglyceride-rich remnant lipoprotein) levels. Hyperlipoproteinemias are classified according to the scheme in Table 2 .[10]
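A companion sketch of this secondary target (again an illustrative addition; the 200 mg/dL trigger is the one given in the text):

```python
# Illustrative only: non-HDL-C, the NCEP ATP III secondary target when
# triglycerides are 200 mg/dL or higher. Values are in mg/dL.

def non_hdl_c(total_chol, hdl_c, triglycerides):
    if triglycerides >= 200:
        return total_chol - hdl_c  # the sum of LDL and VLDL cholesterol
    return None  # secondary target not assessed below this level

print(non_hdl_c(240, 45, 250))  # -> 195
```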
Management of Hypercholesterolemia
Lifestyle modifications, such as smoking cessation, dietary therapy, and physical activity, with or without antilipemic drug therapy, are used to manage hypercholesterolemia and reduce risk of CHD. According to the NCEP, the intensity of treatment for hypercholesterolemia should be guided by the patient's risk, which depends on the LDL-C level, the number of CHD risk factors, and whether CHD is already present ( Table 3 ).[2] More aggressive interventions are recommended for patients at high risk than for patients at lower risk.[2]
The target LDL-C level is progressively lower as the risk for CHD increases. The measured LDL-C level at which drug therapy should be initiated also is lower as the risk for CHD increases. For example, drug therapy should be considered for patients with CHD if their measured LDL-C level is 130 mg/dL or higher and the target level is 100 mg/dL or less (ie, a reduction of 23% or more is required). By contrast, drug therapy should be considered for a patient without CHD and with fewer than 2 risk factors if the measured LDL-C level is 190 mg/dL or higher and the target level is less than 160 mg/dL (ie, a reduction of 16% or more is needed).[11]
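The percentage reductions quoted in these two examples follow directly from (measured − target) / measured; a quick check (an added sketch, not source material):

```python
# Illustrative only: percent LDL-C reduction needed to move from a
# measured level to a target level (values in mg/dL).

def required_reduction_pct(measured_ldl, target_ldl):
    return 100 * (measured_ldl - target_ldl) / measured_ldl

print(round(required_reduction_pct(130, 100)))  # 23, the CHD example
print(round(required_reduction_pct(190, 160)))  # 16, the lower-risk example
```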
The American Diabetes Association recommends aggressive treatment of hypercholesterolemia in patients with diabetes mellitus because diabetes increases the risk of CHD 2-fold to 4-fold.[12] Drug therapy should be initiated if the measured LDL-C level is 130 mg/dL or greater in adults with diabetes (100 mg/dL or greater if CHD is also present) and the target LDL-C level is 100 mg/dL or lower.[12]
Dietary Therapy and Physical Activity
The NCEP ATP III recommends a multifaceted lifestyle approach, termed therapeutic lifestyle changes (TLC), to lower LDL levels and reduce risk for CHD.[13] Nonpharmacologic approaches considered effective for lipid management include dietary modifications, weight loss or control, aerobic exercise, moderate alcohol consumption, and smoking cessation. The objectives of dietary therapy are to reduce elevated serum cholesterol levels and maintain good nutrition. The dietary recommendations include reduction of saturated fats to less than 7% of total calories and cholesterol to less than 200 mg/d. Therapeutic options for enhancing LDL lowering include increasing consumption of plant stanols/sterols to 2 g/d and increasing intake of viscous (soluble) fiber to 10 to 25 g/d.
Physical activity is an essential element in managing hypercholesterolemia. In overweight patients, physical activity and dietary therapy promote weight loss. Obesity is not listed as a risk factor for CHD because it acts indirectly through other risk factors, such as diabetes mellitus, hyperlipidemia, and hypertension.[2] Exercise and weight loss in overweight patients may reduce triglyceride levels, blood pressure, risk for diabetes mellitus, and cholesterol levels.[2] Restricting alcohol intake is recommended for patients who are overweight or who have hypertriglyceridemia, because alcohol contributes calories and increases serum triglyceride concentrations in many people.[9]
If after at least 6 months of dietary therapy and exercise the reduction in LDL-C levels is inadequate (or if the LDL-C level rises above the level at which drug therapy is indicated), the addition of drug therapy to dietary therapy should be considered. Drug therapy is not a substitute for dietary therapy.[2] Potential benefit, adverse effects, and costs enter into the decision to use drug therapy.
Drug Therapy
Hypolipidemic agents to treat hypercholesterolemia include bile acid sequestrants (cholestyramine, colestipol, and colesevelam), niacin (nicotinic acid), fibric acid derivatives (fenofibrate, gemfibrozil), and HMG-CoA reductase inhibitors (simvastatin, pravastatin, and atorvastatin, commonly referred to as statins).[6] When selecting among the available hypolipidemic drug therapies, one should take into consideration the patient's lipid profile (ie, the presence of hypertriglyceridemia and hypercholesterolemia, low HDL level), contraindications, potential drug interactions, and cost.
Measurement of LDL-C levels 4 to 6 weeks and 3 months after initiation of antilipemic drug therapy is recommended by the NCEP.[2] The target LDL-C level for drug therapy is the same as that for dietary therapy ( Table 3 ). Adjustments in drug dosage may be necessary to achieve the target level or to avoid adverse effects. If the response is adequate, checkups should be scheduled at least every 4 months, although measurement of serum total cholesterol concentration suffices at these visits (lipoprotein analysis with LDL-C measurement may take place annually after the 3-month visit).[2]
If the response to the initial drug is inadequate despite adjustment, another drug or a combination of 2 drugs should be tried. Most patients respond to 1 or 2 drugs.[2]
Bile acid sequestrants. The bile acid sequestrants cholestyramine (Questran), colestipol (Colestid), and colesevelam (Welchol) bind to bile acids in the intestinal tract, interrupting enterohepatic circulation and causing the removal of bile acids from the body when the drug is eliminated in the feces.[14] Although hepatic cholesterol synthesis increases to compensate for these losses, the number of LDL receptors on hepatocytes also increases, promoting clearance of LDL from the bloodstream.[6,14] Both VLDL and triglyceride concentrations may increase during bile acid sequestrant therapy, especially if they are elevated before treatment.
Bile acid sequestrants are appropriate for patients who have hypercholesterolemia but not hypertriglyceridemia. This group includes patients with polygenic or heterozygous familial hypercholesterolemia and those with the hypercholesterolemic form of familial combined hyperlipidemia.
Although these agents primarily lower levels of LDL by 15% to 30%, they also can modestly increase HDL by 3% to 5%.[15] A modest, usually transient, 5% to 10% increase in triglyceride level also occurs, however, secondary to increased triglyceride production and increased VLDL triglyceride content and size.[16] If baseline triglyceride levels are greater than 250 mg/dL, a variable increase in triglyceride levels is seen. In patients with dysbetalipoproteinemia or baseline triglyceride levels higher than 500 mg/dL, a marked increase in triglyceride levels usually will occur, thus contraindicating single-drug therapy with resins.
The ability of both cholestyramine and colestipol to bind bile acids is well documented, but the efficient (95% to 99%) reabsorption of bile acids limits the efficiency of large doses of these drugs. Frequent dosing is thus required to trap a substantial amount of the bile acid pool.
Interactions occur between resins and other substances in the intestine. Interference with the absorption of fat-soluble vitamins, for example, vitamin K, can lead to hypoprothrombinemia. This does not appear detrimental except in patients with significant problems in bile acid metabolism, such as in those with severe liver or small bowel disease. Therefore, multivitamin supplementation is usually not indicated, except in these cases.
Medication taken with or near the time of resin ingestion may be bound and not absorbed. Medications at risk include phenylbutazone, warfarin, thiazide diuretics, propranolol, penicillin G, tetracycline, phenobarbital, thyroxine preparations, and digitalis. The effect on absorption of many other drugs has not been well studied. The current recommendation is that other medications be taken at least 1 hour before or 4 to 6 hours after the bile acid-binding resin. This may limit the use of resins in patients taking multiple drugs concomitantly.
The bile acid sequestrants are not absorbed systemically; therefore, the range of adverse effects is limited. These agents can be used in the treatment of hypercholesterolemia in children and pregnant women, although clear-cut data on the safety of long-term use in children or of use during pregnancy or lactation are not available. Most complaints about resins relate to the taste, texture, and bulkiness. Colesevelam offers the advantage of a tablet formulation; however, as many as six tablets must be taken daily. The most frequent adverse effects of the bile acid sequestrants are dose dependent and include nausea, vomiting, heartburn, abdominal pain, belching, bloating, and constipation. These adverse effects may be minimized by a gradual increase in dose.
Nicotinic acid and derivatives. The lipid-lowering capacity of nicotinic acid was shown more than 30 years ago, and it has been used successfully in a variety of hyperlipidemic conditions. Niacin (Niaspan), a nicotinic acid derivative, reduces hepatic synthesis of VLDL and, as a result, reduces LDL formation.[6,14] It lowers LDL-C and triglyceride levels and raises HDL-C levels.[2]
Changes in lipoprotein levels that occur with standard doses of niacin are reductions in plasma triglyceride and total cholesterol levels of 30% to 40% and 15% to 20%, respectively. The LDL levels may be reduced by 20% or more (reductions exceeded only by HMG-CoA reductase inhibitors). A significant rise in HDL-C levels (approximately 15%) is generally seen. However, adverse effects may be more common with the higher doses. The changes in serum triglyceride and HDL-C concentrations that are induced by niacin are curvilinear, whereas the changes in serum LDL-C concentrations are linear. Thus, a daily dose of 1500 to 2000 mg of niacin will substantially change the serum triglyceride and HDL-C concentrations without causing many of the mucocutaneous and hepatic adverse effects seen with higher doses.
This dose is often ideal for patients with familial combined hyperlipidemia. These patients usually need to take a statin as well, and because it is tolerated better, the statin should be given first. The patients may then be more receptive to moderate doses of plain or timed-release nicotinic acid. Higher doses (3000 to 4500 mg/d) may be needed to reduce serum LDL-C concentrations substantially in patients with familial hypercholesterolemia even when statins and a bile acid-binding resin are given concomitantly.
The most common adverse effects from niacin are gastrointestinal upset, loose bowel movements or diarrhea, peripheral vasodilation (flushing of the face and neck), and pruritus.[17] Niacin-induced vasodilation appears to be mediated by prostaglandins (eg, prostacyclin). Healthcare practitioners and pharmacists can counsel patients on ways to minimize the adverse effects of niacin. Flushing may be reduced by pretreatment with a prostaglandin inhibitor, such as aspirin, 325 mg administered 30 minutes before the niacin dose. The aspirin use can often be discontinued after a few days because tachyphylaxis develops in response to the prostaglandin-mediated flush. Patients can also minimize flushing by taking niacin at the end of a meal and by not taking it with hot liquids.
Hepatotoxicity has been reported in patients receiving niacin; it may be dose related (>2000 mg/d) and associated with the use of extended-release preparations.[14] The symptoms and time course of niacin-induced hepatitis are similar to those associated with statins. Timed-release formulations of nicotinic acid are designed to minimize cutaneous flushing. However, the absence of flushing may indicate poor gastrointestinal absorption.[17] Additional drawbacks of such formulations are lesser decreases in serum triglyceride concentrations and lesser increases in serum HDL-C concentrations than are induced with plain nicotinic acid. Healthcare professionals can suggest the timed-release formulations for patients who cannot tolerate plain niacin and should follow up to evaluate the antilipemic effect and monitor aminotransferase levels.
Less common adverse effects include acanthosis nigricans, vascular-type headaches, orthostatic hypotension (especially in elderly patients), and reversible blurred vision resulting from macular edema. Niacin inhibits the tubular excretion of uric acid, predisposing patients to hyperuricemia and gout. Elevations in plasma glucose levels, attributed to the rebound in fatty acid concentrations that may occur after each dose of niacin, occur in some individuals, leading to glucose intolerance. The elevated free fatty acids may compete with the use of glucose by peripheral tissues.
Fibric acid derivatives. The prototypical fibric acid is clofibrate (which is not used in the United States). Clofibrate and related drugs somewhat resemble short-chain fatty acids and function to increase the oxidation of fatty acids in both liver and muscle. The increase in fatty acid oxidation in the liver is associated with increased formation of ketone bodies (an effect that is not clinically important) and decreased secretion of triglyceride-rich lipoproteins.[17] Fenofibrate (Tricor) and gemfibrozil (Lopid) are marketed in the United States; bezafibrate and ciprofibrate are available in Europe. These drugs, known as fibrates, reduce VLDL synthesis and VLDL cholesterol levels.[6,14] They decrease LDL-C levels in patients with hypercholesterolemia or combined hypercholesterolemia and hypertriglyceridemia and increase HDL-C levels.[14,16]
Therapy with the fibrates results in triglyceride reduction of up to 50%, making these first-line agents in primary hypertriglyceridemia. The LDL-C reduction averages 10% to 15%, but this effect is variable, and some patients may in fact have a mild increase in LDL levels secondary to fibrate treatment. If the increase is substantial, a low-dose statin is often added to the regimen. Fenofibrate may lower serum LDL-C concentrations more effectively than does clofibrate or gemfibrozil. In addition, HDL-C levels may increase by up to 25%.[18] Thus, the primary indications for fibrate therapy are serum triglyceride concentrations of more than 1000 mg/dL, remnant removal disease, and low serum HDL-C concentrations. Fibrates may also be useful in patients with combined hyperlipidemia.
Rash and dyspepsia are the most common adverse effects from fenofibrate. Gastrointestinal complaints (eg, abdominal or epigastric pain, dyspepsia) are the most common adverse effects from gemfibrozil. Fibrates may increase the risk of gallstones. They may also potentiate the effects of oral anticoagulants.[14]
HMG-CoA reductase inhibitors. Drugs of the statin class are structurally similar to HMG-CoA, a precursor of cholesterol, and are competitive inhibitors of HMG-CoA reductase, which catalyzes the rate-limiting step in the synthesis of cholesterol. By inhibiting this step, the statins increase the number of LDL receptors on hepatocytes and promote clearance of LDL-C from the bloodstream.[7] These agents reduce hepatic secretion of VLDL and may also increase HDL-C levels. They lower serum LDL-C concentrations by up-regulating LDL receptor activity and reducing the entry of LDL into the circulation. Inhibitors of HMG-CoA reductase also may have antiatherogenic effects unrelated to their lipid-lowering effects (eg, improved function of the endothelial cells that line the inner surface of the arterial wall and decreased platelet thrombus formation).[19]
Given alone for primary or secondary prevention, these drugs can reduce the incidence of CHD by 25% to 60% and reduce the risk of death from any cause by approximately 30%.[17] Therapy with a statin also reduces the risk of angina pectoris and cerebrovascular accidents and decreases the need for coronary artery bypass grafting and angioplasty.[17]
The dose required to lower serum LDL-C concentrations to a similar degree varies substantially among the statins. In addition, the response to increases in the dose is not proportional, because the dose-response relation for all 6 statins is curvilinear. In general, a doubling of the dose decreases serum LDL-C concentrations by an additional 6%. The maximal reduction in serum LDL-C concentrations induced by treatment with a statin ranges from 24% to 60%.[17] Another statin that is currently under review (but not approved for use) at the Food and Drug Administration, rosuvastatin, may have an even greater effect on LDL-C than the ones currently available.
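The "additional 6%" rule of thumb can be expressed as a simple estimate; note that the 30% starting-dose reduction below is an assumed example value, not a figure from the article:

```python
# Illustrative only: approximate LDL-C reduction as a statin dose is
# doubled, using the ~6-percentage-point rule of thumb cited above.
# The 30% starting-dose effect is an assumed example value.

def estimated_ldl_reduction_pct(start_reduction_pct, dose_doublings):
    return start_reduction_pct + 6 * dose_doublings

for doublings in range(4):
    print(doublings, estimated_ldl_reduction_pct(30, doublings))
# 30% at the starting dose; roughly 36%, 42%, and 48% after 1-3 doublings
```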
All the statins lower serum triglyceride concentrations, with atorvastatin and simvastatin having the greatest effect. In general, the higher the baseline serum triglyceride concentration, the greater the decrease induced by statin therapy. Statins are a useful adjunct in the treatment of moderate hypertriglyceridemia, but they are often insufficient as monotherapy.
The HMG-CoA reductase inhibitors are considered to be the most effective hypolipidemic agents available for lowering LDL-C levels.[20] Statins are useful in treating most of the major types of hyperlipidemia. The classic indication is heterozygous familial or polygenic hypercholesterolemia, in which the LDL receptor activity is reduced. Statins increase LDL receptor activity by inhibiting the synthesis of cholesterol. They also reduce the formation of apolipoprotein B-containing lipoproteins and impede their entry into the circulation and can reduce high serum concentrations of triglycerides and remnant lipoproteins. As a result, statin therapy is also indicated in patients with combined or familial combined hyperlipidemia, remnant removal disease, and the hyperlipidemia of diabetes and renal failure.[20]
Patients should be counseled regarding the proper administration and adverse effects of the HMG-CoA reductase inhibitors. Since lovastatin (Mevacor) is better absorbed when taken with food, it should be taken with meals. However, pravastatin is best taken on an empty stomach or at bedtime. Food has less of an effect on the absorption of the other statins. Because the rate of endogenous cholesterol synthesis is higher at night, all the statins are best given in the evening.[14]
Adverse effects from HMG-CoA reductase inhibitors are uncommon. The most common adverse effects of statins are gastrointestinal upset, muscle aches, and hepatitis. Liver enzyme elevations have been reported.[14] Rarely, patients may complain of rash, peripheral neuropathy, insomnia, bad or vivid dreams, and difficulty sleeping or concentrating. For patients who have central nervous system adverse effects, a statin with no penetration of the central nervous system, such as pravastatin, can be tried.
Myopathy (muscle aching or weakness accompanied by increases in creatine kinase) and rhabdomyolysis (muscle breakdown sometimes accompanied by myoglobinuria and acute renal failure) are rare complications of HMG-CoA reductase inhibitor therapy. The risk of myopathy and rhabdomyolysis is increased when HMG-CoA reductase inhibitors are used with certain other drugs, including other hypolipidemic agents (specifically, fibric acid derivatives and niacin), azole antifungal agents, erythromycin, and immunosuppressants (eg, cyclosporine).[7,14] The mechanism of these interactions is unknown. The HMG-CoA reductase inhibitors are contraindicated in pregnancy (US Food and Drug Administration pregnancy category X).
Pharmacokinetic differences among individual HMG-CoA reductase inhibitors may translate into differences in their propensity to interact with other drugs, although comparative studies have not been performed. Most HMG-CoA reductase inhibitors (pravastatin is an exception) are metabolized by hepatic cytochrome P450 enzymes (particularly the 3A4 isoenzyme) and can interact with drugs that inhibit these enzymes, resulting in accumulation of the HMG-CoA reductase inhibitor.[21,22] All of the HMG-CoA reductase inhibitors are highly protein bound except for pravastatin, which is roughly 50% bound and possibly less likely to interfere with other drugs that are highly protein bound.[6]
Other Therapies
Dietary supplementation with soluble fiber, such as psyllium husk, oat bran, guar gum and pectin, and fruit and vegetable fibers, lowers serum LDL-C concentrations by 5% to 10%.[17] Sitostanol, a plant sterol incorporated into a margarine-like spread (Take Control), inhibits gastrointestinal absorption of cholesterol. Small amounts of plant stanols are also found in soybeans, wheat, and rice. The n-3 fatty acids, also known as omega-3 fatty acids, can lower serum triglyceride concentrations by up to 30% at a daily dose of 3 g and by about 50% at a daily dose of 9 g.[17] Eating 9 to 12 oz of salmon a day supplies enough omega-3 fatty acid; however, most people obtain the dosage through fish oil supplements.
In postmenopausal women, oral estrogen therapy can lower serum LDL-C concentrations by approximately 10% and raise serum HDL-C concentrations by approximately 15%. However, the risk of venous thrombosis doubles or triples, and there is no overall reduction in the risk of recurrence of coronary disease among women.[23] Women with serum triglyceride concentrations above 300 mg/dL may be treated with transdermal estrogen to aid in lowering triglyceride levels. Rarely, an anabolic steroid such as oxandrolone (Oxandrin) or stanozolol (Winstrol) is used to reduce the hepatic secretion of triglycerides.[17]
Advanced Practice Nurse and Pharmacist Roles
Cardiovascular disease accounts for nearly 50% of all deaths in the United States. Clinical trials and pathophysiologic evidence support the use of aggressive therapy in patients with arteriosclerotic vascular disease and in those with several risk factors for the disease. Pharmacists and advanced practice nurses can have a large impact on the health of their patients by conducting cholesterol screening programs and obtaining patient histories to determine if the patient is at risk for CHD.
Patient education is also a vital responsibility of the entire healthcare team caring for patients with hypercholesterolemia. Patients should be counseled on the role of dietary therapy, exercise, and drug therapy. Healthcare practitioners must consider hypolipidemic drug therapy to achieve the target LDL-C goal when necessary. Conscientious attention to therapeutic lifestyle changes and pharmaceutical care of patients with lipid disorders will improve patient adherence to the treatment plan and ultimate patient outcomes.
Disclosure
Sandra Chase, BS, PharmD, has disclosed that she has received grants for educational activities with Astra Zeneca, Merck Co Inc, GlaxoSmithKline, Bristol Myers Squibb, Scios Inc, and Millennium Pharmaceuticals, Inc. She discusses the investigational product rosuvastatin in this article.
From The Medscape Journal of Medicine > Clinical Nutrition & Obesity
Fructose -- How Worried Should We Be?
George A. Bray, MD
Posted: 07/09/2008
The article by Vos and associates[1] in this issue of The Medscape Journal of Medicine focuses our attention once again on the amount of fructose that is consumed by the American public. Fructose in our diet comes from 3 main sources: sucrose (common table sugar), high-fructose corn syrup (HFCS) made from corn starch, and fruit. In fruit, fructose serves as a signal for sweetness and good nutrition. For our ancestors, foods with a sweet taste were likely to be "healthy" and to have other important nutrients in them, and the quantity of fructose obtained this way was small in comparison with today's intake. In contrast to the role of fructose in fruits, fructose in other foods serves as a source of sweetness, often without much, if anything, in the way of other nutrients.
All evidence suggests that fructose ingestion has been rising steadily for a long time[2,3] ( Table ), but analyses have been hampered in part by the lack of good data. Vos and her colleagues take an important step toward remedying this lack. They obtained information from the Third National Health and Nutrition Examination Survey (NHANES III), a survey of the eating habits of Americans conducted from 1988 to 1994. It is clear from the analysis that Americans are getting a lot of fructose in their diet. Mean daily consumption of fructose was 54.7 g/d, with a range of 38.4 to 72.8 g/d, and accounted for 10.2% of total daily caloric intake. Consumption was highest among adolescents (12- to 18-year-olds), who consumed 72.8 g/d, or more than 12% of their total calories, from fructose. One fourth of this group consumed at least 15% of calories from fructose. The largest source of fructose was sugar-sweetened beverages (30%), followed by grains (22%) and fruit/fruit juice (19%).
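As a rough consistency check on those figures (an added sketch; the 4 kcal/g energy value and the ~2,400 kcal/d adolescent intake are illustrative assumptions, not survey values):

```python
# Illustrative only: does 72.8 g/d of fructose plausibly equal ~12% of
# an adolescent's calories? Assumes ~4 kcal/g and ~2,400 kcal/d intake.

fructose_g_per_day = 72.8                     # adolescent mean from the analysis
kcal_from_fructose = fructose_g_per_day * 4   # ~291 kcal
share_of_intake = kcal_from_fructose / 2400
print(round(kcal_from_fructose), f"{share_of_intake:.0%}")  # 291, 12%
```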
So what difference does it make if children and adolescents get these large amounts of fructose? First, when fructose comes from sugar- or HFCS-sweetened beverages (50% of sucrose is fructose, and 55% of HFCS is fructose), they get no other nutrients. This contrasts with the case when fructose is obtained from fruit with its supply of natural nutrients. Thus, our children and adolescents are being shortchanged on nutrients when they drink calorically sweetened soft drinks and fruit drinks that have almost no fruit in them. Second, the more soft drinks youngsters consume, the less milk they consume,[4] which shortchanges them again on the calcium and vitamin D that are so essential for making strong bones. Third, a substantial body of data suggests that calories in calorically sweetened beverages are not perceived by the body in the same way as those in solid food.[4,5] Calories in soft drinks appear to be "add-on" calories to the other foods in the diet rather than suppressing intake of other foods by an equivalent amount.
The current epidemic of obesity could be explained by the consumption of an extra 20-ounce soft drink each day. In addition to the calories these beverages contain, they are a major source of fructose. A growing number of studies suggest that fructose intake, particularly when accompanied by fat, may be unhealthy. Professor Yudkin's 1986 description of sugar as "pure, white and deadly" may be partly right: it is the fructose part of the sucrose (table sugar) molecule and the fructose from HFCS that best fit the title of his book. HFCS is a visible marker for highly refined foods -- the kind of food I want to avoid in my diet. The conclusions I have reached here will not make the caloric sweetener industry happy. As Yudkin said 25 years ago,[2] "I suppose it is natural for the vast and powerful sugar interests to seek to protect themselves, since in the wealthier countries sugar makes a greater contribution to our diets, measured in calories, than does meat or bread or any other single commodity." One needs to evaluate these financial interests in terms of their public health implications. This will not be an easy task.
Fructose consumption, either from beverages or food, may have an additional detrimental effect. In a study from Switzerland, dietary fructose was found to predict an increased level of low-density lipoprotein cholesterol in children.[6] Fructose, unlike other sugars, increases serum uric acid levels. Nakagawa and colleagues[7] proposed that this happens when fructose is metabolized in the liver, the major organ of fructose metabolism. Adenosine triphosphate (ATP) is consumed by the enzyme fructokinase in phosphorylating fructose to fructose-1-phosphate. The adenosine 5'-diphosphate thus formed can be further broken down to adenosine 5'-monophosphate, then to inosine 5'-monophosphate, and finally to uric acid. Thus, the metabolism of fructose in the liver leads to the production of uric acid. These authors proposed that the high levels of uric acid could set the stage for advancing cardiovascular disease by reducing the availability of nitric oxide, which is crucial for maintaining normal blood pressure and normal function of blood vessel walls (endothelium).[7] If this hypothesis is borne out, it will provide another reason that nature preferred glucose over fructose as a substrate for metabolism during the evolutionary process.
Soft drink consumption has been linked to development of cardiometabolic risk factors and the metabolic syndrome in participants in the Framingham Study.[8] Individuals consuming at least 1 soft drink/d had a higher prevalence of the metabolic syndrome (odds ratio, 1.48; 95% CI, 1.30-1.69) and an increased risk for the metabolic syndrome over 4 years of follow-up. Most recently, fructose intake has been shown to be directly related to the risk for gout in men.[9]
It is amazing to me that many of our public schools have resorted to financial contracts with beverage companies to make calorie-containing soft drinks that have little nutritional value available on school premises. How we can put the children who are susceptible to obesity at risk by this strategy has perplexed me for years. Maybe it is time for the public to worry about what fructose may be doing to their children and themselves.
From Morbidity & Mortality Weekly Report
Prevalence of Obesity Among Adults with Arthritis
United States, 2003-2009
Jennifer M. Hootman, PhD; Charles G. Helmick, MD; Casey J. Hannan, MPH; Liping Pan, MD
Posted: 06/10/2011; Morbidity & Mortality Weekly Report. 2011;60(16):509-513. © 2011 Centers for Disease Control and Prevention (CDC)
Abstract and Introduction
Introduction
Obesity and arthritis are critical public health problems with high prevalences and medical costs. In the United States, an estimated 72.5 million adults aged ≥20 years are obese, and 50 million adults have arthritis. Medical costs are estimated at $147 billion for obesity and $128 billion for arthritis each year.[1–3] Obesity is common among persons with arthritis[2] and is a modifiable risk factor associated with progression of arthritis, activity limitation, disability, reduced quality of life, total joint replacement, and poor clinical outcomes after joint replacement.[4,5] To assess obesity prevalence among adults with doctor-diagnosed arthritis, CDC analyzed data from the Behavioral Risk Factor Surveillance System (BRFSS) for the period 2003–2009. This report summarizes the results of that analysis, which determined that, among adults with arthritis, 1) obesity prevalence, on average, was 54% higher, compared with adults without arthritis, 2) obesity prevalence varied widely by state (2009 range: 26.9% in Colorado to 43.5% in Louisiana), 3) obesity prevalence increased significantly from 2003 to 2009 in 14 states and Puerto Rico and decreased in the District of Columbia (DC), and 4) the number of U.S. states with age-adjusted obesity prevalence ≥30.0% increased from 38 (including DC) in 2003 to 48 in 2009. Through efforts to prevent, screen, and treat obesity in adults, clinicians and public health practitioners can collaborate to reduce the impact of obesity on U.S. adults with arthritis.
BRFSS* is an annual, random-digit–dialed telephone survey of adults aged ≥18 years conducted in all 50 states, DC, Guam, Puerto Rico, and the U.S. Virgin Islands.* Arthritis and obesity prevalence data are collected in odd numbered years. For this analysis, the total survey participants were as follows: 264,864 in 2003; 356,112 in 2005; 430,912 in 2007; and 432,607 in 2009. Data from those 4 years for the 50 states and DC were used to assess median obesity prevalence among adults with and without arthritis and to produce obesity prevalence maps. Data from 2003 and 2009 were used to assess changes in obesity prevalence among adults with arthritis by state/area. For 2003, 2005, 2007, and 2009 respectively, median Council of American Survey and Research Organizations (CASRO) response rates were 53.2%, 51.1%, 50.6%, and 52.5%; median CASRO cooperation rates were 74.8%, 75.1%, 72.1%, and 75.0%, respectively.†
Respondents were defined as having arthritis if they responded "yes" to the question "Have you ever been told by a doctor or other health professional that you have some form of arthritis, rheumatoid arthritis, gout, lupus, or fibromyalgia?" Body mass index (weight [kg]/height [m²]) was calculated from self-reported weight and height. Obesity was defined as a body mass index ≥30.0. Respondents reporting body weight ≥500 pounds or height ≥7 feet or <3 feet were excluded.[1] Unadjusted, weighted obesity prevalence and 95% confidence intervals for each state/area were calculated using sampling weights, which take into account the complex sample design, nonresponse, and noncoverage, by state/area; unadjusted estimates were calculated to enable states to use these data in program planning and awareness efforts. Statistical significance of percentage changes in unadjusted obesity prevalence by state/area was determined by t-test (p<0.05). In addition, state-specific obesity prevalence estimates among adults with arthritis were age-adjusted to the 2000 U.S. standard population.§
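A minimal sketch of the case definition just described (the unit conversions and function are illustrative additions; the cut points and exclusions are those stated in the report):

```python
# Illustrative only: BRFSS obesity definition, BMI = weight (kg) /
# height (m)^2, obese if BMI >= 30.0, excluding implausible
# self-reports (weight >= 500 lb, height >= 7 ft or < 3 ft).

def bmi_category(weight_lb, height_in):
    if weight_lb >= 500 or height_in >= 84 or height_in < 36:
        return None  # excluded from the analysis
    weight_kg = weight_lb * 0.4536
    height_m = height_in * 0.0254
    bmi = weight_kg / height_m ** 2
    return "obese" if bmi >= 30.0 else "not obese"

print(bmi_category(210, 66))  # BMI ~33.9 -> "obese"
```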
For each of the 4 years analyzed, unadjusted median obesity prevalence for the 50 states and DC was significantly higher among adults with arthritis than adults without arthritis. On average for the 4 years, unadjusted state median obesity prevalence among adults with arthritis was 54% higher (range: 49.2%–60.5%) than among adults without arthritis (Figure 1).
Figure 1.
Median unadjusted, weighted prevalence of obesity* among adults with and without arthritis — Behavioral Risk Factor Surveillance System, 50 states and District of Columbia, 2003, 2005, 2007, and 2009
* Body mass index (weight [kg]/height [m²]) ≥30.0.
In 2003, unadjusted median state (including DC) obesity prevalence among adults with arthritis was 33.2%; prevalence ranged from 25.1% in Colorado to 40.1% in Ohio (Table). In 2009, unadjusted median state obesity prevalence among adults with arthritis was 35.2%; prevalence ranged from 26.9% in Colorado to 43.5% in Louisiana. From 2003 to 2009, the percentage change in prevalence ranged from -19.2% in DC to 26.2% in Wisconsin. From 2003 to 2009, unadjusted obesity prevalence among adults with arthritis increased significantly in 14 states and Puerto Rico and decreased significantly in DC (Table).
In 2003, a total of 37 states and DC had an age-adjusted obesity prevalence among adults with arthritis ≥30.0% (including two states with prevalence ≥40.0%) (Figure 2). From 2003 to 2009, the number of states with obesity prevalence ≥30.0% increased each survey year: 42 states in 2005 (zero states ≥40.0%), 45 states and DC in 2007 (seven states ≥40.0%), and 48 states in 2009 (12 states ≥40.0%) (Figure 2).
Figure 2.
Age-adjusted, weighted percentage of adults with arthritis who were categorized as obese* — Behavioral Risk Factor Surveillance System, 50 states and District of Columbia, 2003, 2005, 2007, and 2009
* Body mass index (weight [kg]/height [m²]) ≥30.0.
* Additional information available at http://www.cdc.gov/brfss/technical_infodata/surveydata.htm.
† Response rates are defined as the percentage of completed interviews among all eligible persons. Cooperation rates are defined as the percentage of completed interviews among all eligible persons who actually were contacted.
§ Additional information available at http://www.cdc.gov/nchs/data/statnt/statnt20.pdf.
Editorial Note
The findings in this report indicate that, among adults with arthritis in the United States, obesity prevalence was higher than among adults without arthritis and increased significantly in 15 states/areas from 2003 to 2009. In 2009, age-adjusted obesity prevalence among adults with arthritis was ≥30% in 48 states; obesity prevalence among adults without arthritis was ≥30% in only two states (CDC, unpublished data, 2011).
Because of the complex relationships between obesity, joint pain, function, and physical activity, adults with arthritis have difficulty maintaining and losing weight.[4] Obesity is an independent risk factor for severe pain, reduced physical function, and disability among adults with arthritis, which might be related both to the increased mechanical stress that extra weight places on the joints and to the inflammatory effects of elevated cytokines and adipokines on cartilage degradation.[4] Obesity also can impair the ability to be physically active; physical activity is a key self-management strategy that not only can improve pain and function among adults with arthritis but also contributes to the energy expenditure needed to lose or maintain weight.[4]
Even small amounts of weight loss (e.g., 10–12 pounds) can have important benefits for persons with arthritis.[4] Randomized controlled interventions of diet, exercise, and diet plus exercise among overweight and obese adults with osteoarthritis have reduced body weight by approximately 5%, improving symptoms and functioning, and preventing short-term disability.[4] Intentional weight loss among obese adults with osteoarthritis might reduce the risk for early mortality by nearly 50%.[6] Reducing obesity prevalence to approximately the level observed in 2000 in this population might prevent 111,206 total knee replacements and add an estimated 7.8 million quality-adjusted life-years.[7]
For health-care providers, counseling patients with arthritis to lose weight and be more physically active has been shown to correlate strongly with healthy behaviors such as attempts to lose weight.[8] However, provider counseling for weight loss and physical activity for adults with arthritis is below the Healthy People 2010 target[9] and represents an effective but underused opportunity to improve the health of adults with arthritis. Community-based efforts to reduce or maintain weight recommended for adults by the Guide to Community Preventive Services include technology-supported coaching or counseling interventions as well as worksite strategies (e.g., policies to improve access to healthy foods and opportunities to be physically active).¶ U.S. Preventive Services Task Force clinical recommendations include screening and intensive counseling (one or more sessions per month for at least 3 months), plus behavioral interventions for all obese adults.** Creating linkages between the health-care system and community-based obesity prevention and treatment programs is a potential strategy to address obesity among adults with arthritis.
The findings in this report are subject to at least four limitations. First, all BRFSS information is self-reported and subject to recall bias. In a study of 2001–2006 data, weight was found to be underestimated, especially by women, and height was found to be overestimated by both men and women,[10] and these tendencies might affect BRFSS results. Second, single-year estimates of obesity prevalence among adults with arthritis for individual states might be imprecise because of small sample sizes that result from year-to-year differences in survey execution, budgetary constraints, and natural disasters. All estimates in this report meet minimum reliability standards (relative standard errors <30.0%); however, some estimates with wide confidence intervals are less precise. Third, BRFSS does not include persons residing in institutions and, during 2003–2009, did not include households without a landline telephone. Finally, the case-finding question in this analysis covers a range of conditions (i.e., some form of arthritis, rheumatoid arthritis, gout, lupus, or fibromyalgia), which might have different relationships to obesity. Because of the survey design, separate analyses by condition type could not be performed.
Approximately 22% of U.S. adults have arthritis,[2] and a disproportionate number of those persons are categorized as obese. Efforts are needed to increase access to and availability of effective services and programs to manage both chronic conditions. A broad approach to reducing obesity, as outlined in the Surgeon General's Vision for a Healthy and Fit Nation 2010, †† includes addressing both diet and physical activity, leveraging multiple sectors (e.g., health care, communities, and work sites), and utilizing various strategies (e.g., individual behavior, environment, and policy changes). Such an approach might help adults with both conditions increase healthy behaviors that can lessen the impact of obesity and arthritis and improve their overall quality of life.
¶ Additional information available at http://www.thecommunityguide.org/obesity/communitysettings.html.
** Additional information available at http://www.uspreventiveservicestaskforce.org/3rduspstf/obesity/obesrr.pdf.
†† Available at http://www.surgeongeneral.gov/library/obesityvision/obesityvision2010.pdf.
From Southern Medical Journal
Nephrolithiasis: Evaluation and Management
Zachary Z. Brener, MD; James F. Winchester, MD; Hertzel Salman, MD; Michael Bergman, MD
Posted: 02/28/2011; South Med J. 2011;104(2):133-139. © 2011 Lippincott Williams & Wilkins
Abstract and Introduction
Abstract
Nephrolithiasis is a major cause of morbidity involving the urinary tract. The prevalence of this disease in the United States has increased from 3.8% in the 1970s to 5.2% in the 1990s. There were nearly two million physician-office visits for nephrolithiasis in the year 2000, with estimated annual costs totaling $2 billion. New information has become available on the clinical presentation, epidemiologic risk factors, evaluative approach, and outcome of various therapeutic strategies. In this report, we will review the epidemiology and mechanisms of kidney-stone formation and outline management aimed at preventing recurrences. Improved awareness and education in both the general population and among health-care providers about these modifiable risk factors has the potential to improve general health and decrease morbidity and mortality secondary to renal-stone disease.
Introduction
Nephrolithiasis—from the Greek word nephros, meaning "kidney" and lithos, meaning "stone"—refers to the condition of having stones (calculi) in the kidney or collecting system. Nephrolithiasis is a world-wide problem, with prevalence rates of 1 to 5% in Asia, 5 to 9% in Europe, 13% in North America, and 20% in Saudi Arabia.[1] The composition of stones and their location may also significantly differ in different countries. New information has become available on the clinical presentation, epidemiologic risk factors, evaluative approach, and outcome of various therapeutic strategies. In this report, we will review the complex pathophysiology of the various types of kidney stones, and explore the current evidence for their epidemiology, prevention, and treatment.
Epidemiology
Nephrolithiasis is a common presentation, with an annual incidence of 7 to 12 cases per 10,000 persons and a lifetime prevalence of 10% in white men and 5% in women in the United States.[2] Recent studies using the National Health and Nutrition Examination Survey (NHANES) have suggested that the prevalence of stone disease increased from 3.8 to 5.2% nationwide.[3] This increase was especially observed in whites, in men more than women, and in patients of greater age. Recent data suggest that diet and lifestyle may partially account for changes in stone-disease prevalence and for the apparent increase in nephrolithiasis in women.[4] Black women excrete less urinary calcium and have a higher urinary pH than white women,[5] and therefore have a lower incidence both of nephrolithiasis and of osteoporosis. The rate of recurrence for kidney stones is also much higher in males. Once an individual has had a kidney stone, he or she is more likely to have another: if left untreated, the recurrence rate is 10 to 20% within 1 to 2 years, 35% within 5 years, and 60% within 10 years.
The Southeast and Southwest areas of the United States typically have a higher prevalence of nephrolithiasis, due to the impact of hot weather and hydration status on stone formation.[2] Summer has been associated with greater nephrolithiasis incidence, related to sunlight-induced increase of vitamin-D production and calcium absorption.
Clinical Presentation
Most patients present with moderate to severe colic, caused by a stone entering the ureter. Stones in the proximal (upper) ureter cause pain in the flank or anterior upper abdomen.[6] As the stone moves further down the ureter toward the bladder, the pain often radiates to the groin and the ipsilateral testicle or labium. Less often, patients present with persistent urinary-tract infection (UTI) or painless hematuria; however, the absence of hematuria does not exclude urolithiasis.[6] The differential diagnosis in a patient with symptoms suggesting renal colic includes musculoskeletal pain, herpes zoster, diverticulitis, cholecystitis, pyelonephritis, renal infarct, renal papillary necrosis, appendicitis, and gynecologic disorders.[6]
Types of Stones
Nearly 90% of stones in men and 70% in women contain calcium, most commonly as calcium oxalate.[7,8] Other types of stones, such as cystine, pure uric acid, and struvite, are much less common (Table 1).[7,8]
Pathophysiology
Nephrolithiasis starts with urinary supersaturation.[7,8] Saturation is the point at which crystals (free ions) in solution are in equilibrium with the salt of that crystal in solution. Increased urinary ion excretion and decreased urine volume both increase free-ion activity and favor stone formation and growth. High urine flow rates (more than 2 liters in 24 hours) reduce supersaturation and might prevent calculus formation; this holds for all stone types, and increasing urine volume is an effective therapy for stones. Uric-acid stone formation is a pH-mediated phenomenon rather than a uric-acid excretion problem.[7,8] People with normal uric-acid excretion and normal urine-flow rates will be highly supersaturated with uric acid when urine pH is very low (less than 5.5). The key phase is nucleation. Nucleation is usually heterogeneous, with a mixture of substances (such as uric acid) forming a nidus on which crystals can form. Nucleation leads to crystal growth, and then to crystal aggregation.
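The volume effect is simple arithmetic: for a fixed daily solute load, urinary concentration falls in proportion to urine volume. A small illustration (the values are examples we supply, not data from the article):

```python
# Illustrative only: concentration = daily excretion / urine volume,
# which is why urine output above ~2 L/d lowers supersaturation.

def urine_concentration_mg_per_l(mg_excreted_per_day, urine_volume_l):
    return mg_excreted_per_day / urine_volume_l

for volume_l in (1.0, 1.5, 2.0, 2.5):
    print(volume_l, round(urine_concentration_mg_per_l(200, volume_l)))
# For a 200 mg/d load, going from 1.0 to 2.0 L/d halves the
# concentration (200 -> 100 mg/L).
```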
Despite similar degrees of supersaturation, some people form stones, whereas others do not. This may be due to the presence of promoters and inhibitors of crystallization. Promoters include hydrogen ions, sodium, and magnesium. Inhibitors include crystallization-inhibiting proteins, such as uropontin and nephrocalcin, as well as glycosaminoglycans, citrate, and pyrophosphate. At the present time, citrate is the only naturally occurring inhibitor that is routinely measured in urine.[9] Crystals are most likely retained at sites of prior injury, such as the renal papillae, or in gravity-dependent locations, such as the lower-pole calices.
Calcium-based Stones
Approximately 70% of all kidney stones contain calcium and are composed of calcium oxalate (26%) or calcium phosphate (7%), or both (35%). Once a patient forms a calcium-containing stone, another stone will generally form in less than seven years, with a decreasing time interval to subsequent stone events. Calcium stones may form in urine that is supersaturated secondary to excess calcium, oxalate, or uric acid excretion, or they may form without a discernible cause. Calcium-based stones have a multifactorial etiology. Several risk factors for calcium-based stones have been identified (Table 1).[7,8]
Most patients with calcium-oxalate stones have hypercalciuria, defined as 24-hour urinary-calcium excretion >300 mg in men, >250 mg in women, or >4 mg/kg in either sex. Hypercalciuria can occur in primary hyperparathyroidism, sarcoidosis, vitamin D excess, glucocorticoid excess, renal-tubular acidosis, hyperthyroidism, malignant neoplasms, and loop-diuretic use, or it may be idiopathic, the most common form. Some patients with idiopathic hypercalciuria have a strong family history and a genetic basis for the disease. Pak and coworkers advocate subdividing individuals with hypercalciuria into three categories: (1) absorptive (increased gastrointestinal absorption of ingested calcium), which responds poorly to dietary modification and is associated with elevated serum calcium and vitamin D and slightly decreased parathyroid hormone (PTH); (2) resorptive (increased bone resorption caused by hyperparathyroidism); and (3) renal (increased urinary excretion of filtered calcium due to a kidney defect), associated with mild hypocalcemia and secondary hyperparathyroidism and occurring in 5–10% of stone formers.[10] Patients with severe absorptive and resorptive hypercalciuria are advised to avoid excessive calcium intake (more than 2 g per day). Thiazide diuretics are the mainstay of therapy for all types of idiopathic hypercalciuria.[10] Increased urinary oxalate may result from either increased gastrointestinal absorption (due to high dietary-oxalate intake or increased fractional-oxalate absorption) or increased endogenous production. Crohn disease and other malabsorptive states are associated with increased urinary-oxalate excretion. With fat malabsorption, calcium is bound in the small bowel to free fatty acids, leaving a larger amount of unbound oxalate available for absorption in the colon.
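The hypercalciuria thresholds above translate directly into a screening check; a sketch (the function and example values are illustrative additions):

```python
# Illustrative only: hypercalciuria per the definition above, using a
# 24-hour urinary calcium measurement (mg/24 h) and body weight (kg).

def is_hypercalciuric(urine_calcium_mg, weight_kg, male):
    sex_cutoff_mg = 300 if male else 250
    return urine_calcium_mg > sex_cutoff_mg or urine_calcium_mg > 4 * weight_kg

print(is_hypercalciuric(320, 80, male=True))   # True: exceeds 300 mg
print(is_hypercalciuric(240, 55, male=False))  # True: exceeds 4 mg/kg (220 mg)
```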
Citrate plays an important role in inhibiting calcium-crystal formation, so hypocitraturia (24-hour urinary-citrate excretion <434 mg in men and <500 mg in women) favors stone formation.[11] Hypocitraturia is typically seen in conditions that cause chronic metabolic acidosis, such as inflammatory-bowel disease (IBD) and renal tubular acidosis (RTA), all of which are associated with an increased occurrence of nephrolithiasis.
Diet plays an important role in the pathogenesis of calcium-based stones. Recent epidemiologic studies support the beneficial role of the normally recommended levels of dietary calcium.[12–14] Evidence suggests that dietary calcium inhibits the absorption of oxalate in the gut, reducing urinary oxalate excretion. In a study comparing calcium supplements taken with meals versus at bedtime, urinary calcium excretion was significantly elevated with both regimens.[15] Urinary-oxalate excretion, however, decreased significantly only when the supplement was taken with meals. The authors concluded that if calcium supplements are taken, they should be consumed with meals when reduction in the risk of stone formation is a goal of therapy.
High-protein, low-carbohydrate diets for weight reduction deliver a marked acid load to the kidney, increase the risk for stone formation, decrease calcium balance, and may increase the risk for bone loss.[16]
Uric-acid Stones
Uric-acid stones occur especially in patients with low urine pH (<6.0) and with hyperuricosuria.[17] Using data from a cohort of 51,529 men in the Health Professionals Follow-up Study, investigators confirmed the independent association between gout and incident kidney stones. The tendency to form uric-acid stones is reported to be increasing in patients with metabolic syndrome, which may include diabetes mellitus, hypertension, obesity, and hypertriglyceridemia. This may be a result of the defect in ammonia production by the kidney caused by insulin resistance.[18]
Cystine Stones
Cystinuria is a relatively common autosomal-recessive gastrointestinal and renal-transport disorder of four amino acids, namely cystine, ornithine, arginine, and lysine.[19] Cystine is insoluble in normally-acidic urine and thus precipitates into stones.
Struvite Stones
Struvite stones are the result of chronic upper-urinary infection with urease-producing bacteria including Proteus spp, Haemophilus spp, Klebsiella spp, and Ureaplasma urealyticum.[20] The hydrolysis of urea results in ammonia and persistently alkaline urine, which further promotes the formation of stones composed of magnesium ammonium phosphate, also known as struvite.
Struvite stones are often branched ("staghorn" stones), and occur more often in women and in patients who have chronic urinary obstruction.[20]
Evaluation
History and Physical Examination
A detailed history should include the total number of stones, any evidence of residual stones, the number and types of procedures, previous preventive treatments, family history, related medical illnesses (malabsorptive conditions, Crohn disease, colectomy, sarcoidosis, hyperparathyroidism, RTA, recurrent UTI's, neoplasm), diet (volume intake, relative protein intake, high-oxalate foods, salt and calcium intake), and medications (acetazolamide, salicylic acid, acyclovir, indinavir, methyldopa, triamterene).
A physical examination may reveal evidence of bone loss and subcutaneous calcifications.
Radiologic Evaluation
Helical computed tomography (CT) without contrast is the preferred imaging study in patients with suspected nephrolithiasis. This is because helical CT requires no radiocontrast material and shows the distal ureters; it also may detect radiolucent stones (uric-acid stones), small stones (1 to 2 mm), and renal disorders other than stones, including hydronephrosis and intra-abdominal disorders. In a study of 100 patients presenting to an emergency department with flank pain, helical CT had a sensitivity of 98% and a specificity of 100% for the diagnosis of ureteral stones.[21]
Compared with helical CT, the gold standard, ultrasonography (US) has a sensitivity of 24% and a specificity of 90%.[22] US images only the kidney and the proximal ureter and may miss stones smaller than 3 mm in diameter. It is nonetheless preferred in pregnant women with suspected calculi, to minimize radiation exposure.[23] Because of the high rate of false-negative results, if nephrolithiasis is not confirmed in a pregnant woman but symptoms suggestive of renal calculi persist, single-shot intravenous pyelography (IVP) should be performed.[23] Plain radiography (kidney-ureter-bladder view) is inadequate for diagnosis and provides no information about possible obstruction. IVP has few advantages: it exposes the patient to the risks of radiocontrast and gives less information than noncontrast CT.
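To see what these operating characteristics mean at the bedside, the short sketch below converts them into positive and negative predictive values. The 50% pre-test probability is an assumed illustrative figure, not a value from either study.

```python
def predictive_values(sens, spec, prevalence):
    """Return (PPV, NPV) for a test at a given pre-test probability."""
    tp = sens * prevalence              # true positives
    fp = (1 - spec) * (1 - prevalence)  # false positives
    fn = (1 - sens) * prevalence        # false negatives
    tn = spec * (1 - prevalence)        # true negatives
    return tp / (tp + fp), tn / (tn + fn)

for name, sens, spec in [("helical CT", 0.98, 1.00), ("ultrasound", 0.24, 0.90)]:
    ppv, npv = predictive_values(sens, spec, prevalence=0.50)
    print(f"{name}: PPV = {ppv:.2f}, NPV = {npv:.2f}")
# Ultrasound's NPV is only ~0.54 here: a negative scan barely lowers the
# probability of a stone, which is why persistent symptoms warrant further imaging.
```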
Laboratory (Metabolic) Evaluation
Retrieving the stone for chemical analysis is an essential part of the evaluation, because treatment recommendations vary by stone type. The diagnostic evaluation of a first stone includes a routine chemistry panel (electrolytes, creatinine, calcium, and uric acid), urinalysis, and urine culture. If the patient has high serum calcium or high urine calcium, a parathyroid hormone level should be measured. There is a lack of agreement on the appropriate evaluation after a first kidney stone, although such an evaluation appears to be cost-effective; single-stone formers have high recurrence rates and the same incidence of metabolic derangements as patients with recurrent stones. Some experts advocate a more extensive evaluation only in selected circumstances, such as after a second stone, in patients under age 20, or in patients with related medical illnesses. The decision to proceed with a metabolic evaluation in a single-stone former should depend on the patient's willingness to make lifestyle modifications to prevent recurrent stone formation.[10]
The cornerstone of the evaluation is the 24-hour urine collection. Two consecutive 24-hour urine collections should be performed while the patient follows his or her usual diet. Because individuals frequently alter their dietary habits immediately after an episode of renal colic, the patient should wait at least six weeks before performing the collections. Two collections are needed because of substantial day-to-day variability in the values; this approach gives about 92% sensitivity.[7,8] The variables to be measured are total volume, calcium, oxalate, citrate, uric acid, sodium, potassium, phosphorus, pH, and creatinine. Cystine and magnesium can also be measured, depending on the clinical situation. Collections should be sent to a reference laboratory that specializes in kidney-stone evaluations. Some laboratories calculate the relative supersaturation of the urinary factors, which can be used to monitor the impact of therapy.
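A small simulation illustrates why two collections are recommended. All numbers below (the true excretion rate, day-to-day variability, and diagnostic threshold) are assumed, purely illustrative values, not reference ranges.

```python
import numpy as np

rng = np.random.default_rng(seed=0)
true_mean, day_sd, threshold = 320.0, 60.0, 250.0  # mg/day; illustrative only
n = 100_000  # simulated patients, all truly hypercalciuric in this example

one = rng.normal(true_mean, day_sd, n)                    # single collection
two = rng.normal(true_mean, day_sd, (n, 2)).mean(axis=1)  # mean of two days

print(f"Flagged by one collection:  {(one > threshold).mean():.1%}")
print(f"Flagged by the mean of two: {(two > threshold).mean():.1%}")
# Averaging shrinks the day-to-day SD by sqrt(2), so fewer true abnormalities
# are missed through random daily fluctuation alone.
```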
From Southern Medical Journal
Nephrolithiasis: Evaluation and Management
Zachary Z. Brener, MD; James F. Winchester, MD; Hertzel Salman, MD; Michael Bergman, MD
Posted: 02/28/2011; South Med J. 2011;104(2):133-139. © 2011 Lippincott Williams & Wilkins
Abstract and Introduction
Abstract
Nephrolithiasis is a major cause of morbidity involving the urinary tract. The prevalence of this disease in the United States has increased from 3.8% in the 1970s to 5.2% in the 1990s. There were nearly two million physician-office visits for nephrolithiasis in the year 2000, with estimated annual costs totaling $2 billion. New information has become available on the clinical presentation, epidemiologic risk factors, evaluative approach, and outcome of various therapeutic strategies. In this report, we will review the epidemiology and mechanisms of kidney-stone formation and outline management aimed at preventing recurrences. Improved awareness and education in both the general population and among health-care providers about these modifiable risk factors have the potential to improve general health and decrease morbidity and mortality secondary to renal-stone disease.
Introduction
Nephrolithiasis (from the Greek nephros, meaning "kidney," and lithos, meaning "stone") refers to the condition of having stones (calculi) in the kidney or collecting system. Nephrolithiasis is a worldwide problem, with prevalence rates of 1 to 5% in Asia, 5 to 9% in Europe, 13% in North America, and 20% in Saudi Arabia.[1] The composition and location of stones may also differ significantly among countries. New information has become available on the clinical presentation, epidemiologic risk factors, evaluative approach, and outcome of various therapeutic strategies. In this report, we will review the complex pathophysiology of the various types of kidney stones and explore the current evidence for their epidemiology, prevention, and treatment.
From Arthritis Research & Therapy
What Epidemiology Has Told Us about Risk Factors and Aetiopathogenesis in Rheumatic Diseases
Jacqueline E Oliver; Alan J Silman
Posted: 02/03/2010; Arthritis Research & Therapy. 2009;11(3) © 2009 BioMed Central, Ltd.
Abstract and Introduction
Abstract
This article will review how epidemiological studies have advanced our knowledge of both genetic and environmental risk factors for rheumatic diseases over the past decade. The major rheumatic diseases, including rheumatoid arthritis, juvenile idiopathic arthritis, psoriatic arthritis, ankylosing spondylitis, systemic lupus erythematosus, scleroderma, osteoarthritis, gout, fibromyalgia, and chronic widespread pain, will be covered. Advances discussed will include how a number of large prospective studies have improved our knowledge of risk factors, including diet, obesity, hormones, and smoking. The change from small-scale association studies to genome-wide association studies using gene chips to reveal new genetic risk factors will also be reviewed.
Introduction
This article will review epidemiological studies that have advanced the knowledge of both genetic and environmental risk factors for the rheumatic diseases, outlining the major advances that have been achieved over the past decade (Table 1). It will focus on the following diseases: rheumatoid arthritis (RA), juvenile idiopathic arthritis (JIA), psoriatic arthritis (PsA), ankylosing spondylitis (AS), systemic lupus erythematosus (SLE), scleroderma (Scl), osteoarthritis (OA), gout, and fibromyalgia (FM) and chronic widespread pain (CWP).
A number of large prospective studies have improved our knowledge of risk factors: the Framingham Study[1] and the Chingford 1000 Women Study[2] for OA, the Nurses' Health Study cohort for RA[3] and SLE,[4] the European Prospective Investigation of Cancer in Norfolk (EPIC-Norfolk) for inflammatory polyarthritis,[5] and the Health Professionals Follow-up Study for gout.[6] These types of studies provide valuable and robust information. Unfortunately, epidemiological data often are obtained from retrospective studies and underpowered case-control studies, resulting in contradictory findings (for example, studies on the role of caffeine in RA). Although some of the studies have found significant associations with novel risk factors, these studies often suffer from poor design. Meta-analyses have also been performed in an attempt to form conclusions from the available epidemiological data and these are also discussed.
Over the past decade, genetic research has moved from small-scale association studies testing candidate genes in case-control designs, through whole-genome linkage scans based on sibling pairs (an approach limited by the small numbers of both pairs and markers, each in the hundreds), to the more recent and exciting approach of genome-wide association studies using gene chips, which allow hundreds of thousands of single-nucleotide polymorphisms (SNPs) to be investigated, as exemplified by the Wellcome Trust Case-Control Consortium (WTCCC) study of common diseases (including RA).[7] The advantage of this approach is clearly the opportunity to identify novel disease genes; the disadvantage is that it generates large numbers of candidate signals that require replication in further studies.
In general, the studies discussed in this review identify risk factors in whole populations of patients with the disease, but it is more likely that each individual disease phenotype results from a number of different combinations of genetic and environmental risk factors. Some risk factors may have a strong effect but only in a small proportion of patients, whereas others have weak effects, are present in a greater number of individuals, and require the involvement of other risk factors. Consequently, the size of an increased risk is not a reflection of its contribution to disease causation. The strength of risk in this review has nonetheless been split arbitrarily into three groups based on the typically reported strength of association: 'small' (odds ratio [OR] or relative risk [RR] of less than 2), 'moderate' (OR or RR of between 2 and 5), or 'substantial' (OR or RR of greater than 5).
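Expressed as a small helper function, the review's grouping rule is simply a pair of cut-points; the example values in the assertions are drawn from associations reported elsewhere in this review.

```python
def risk_strength(ratio: float) -> str:
    """Classify an OR or RR using the review's arbitrary cut-points."""
    if ratio < 2:
        return "small"
    if ratio <= 5:
        return "moderate"
    return "substantial"

assert risk_strength(1.5) == "small"
assert risk_strength(3.9) == "moderate"      # e.g., high BMI and knee OA
assert risk_strength(15.7) == "substantial"  # e.g., smoking plus two SE copies
```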
Rheumatoid Arthritis
Environmental Risk Factors
Studies of environmental risk factors in RA have focused on diet, smoking, and hormones.[8] Several studies have investigated consumption of coffee/tea/caffeine as a risk factor but with mixed conclusions. Caffeine has been reported to moderately increase the risk of rheumatoid factor (RF)-positive RA, but no increased risk for RF-negative RA was found.[9] Decaffeinated coffee has been associated with a moderately increased risk of RA, whereas tea has been shown to have a protective effect.[10] The authors suggest that the decaffeination process (use of industrial solvents) and small traces of solvents may play a role in the disease whereas tea may have both anti-inflammatory and antioxidative properties.[10] However, other studies have found no association of caffeine/coffee consumption with RA.[3] Clearly, studies that are more robust are needed to verify these results.
The so-called 'Mediterranean diet' has been linked with health benefits for a number of diseases and this is also true for RA.[11,12] High consumption of olive oil, oil-rich fish, fruit and vegetables,[13] or vitamin D[14] has been shown to have a protective role in the development of RA. High consumption of red meat and meat products[5] has been associated with a moderately increased risk of inflammatory polyarthritis, but no risk was found in a more recent study.[15]
Data on the link between smoking and RA are more compelling and include recent studies implicating a gene-environment interaction (see below). The duration and intensity of smoking have been linked to the development of RA in postmenopausal women.[16] Current smokers and those who had quit for 10 years or less had a small increased risk of RA, whereas those who had quit for more than 10 years had no increased risk. Heavy cigarette smoking has been linked with a substantially increased risk of RA[17] (over 13-fold), with risk rising with increasing pack-years of smoking. Current smoking has been found to be a risk factor for RA, with the risk moderately increased in men and more so in men with seropositive RA.[18] Other studies have shown a small increased risk due to smoking for seropositive RA in both women and men but no increased risk for seronegative RA.[19] This risk was evident in subjects with long-term smoking habits (>20 years), even when daily smoking intensity was only moderate. Duration of smoking, rather than intensity, was also found to be a risk factor in a study of female health professionals.[20] Smoking has additionally been linked with increases in both the severity of RA and disease activity,[21,22] supporting a role for smoking in the development of RA. Other host factors associated with RA include blood transfusion, obesity,[23] and high birth weight,[24] each linked with a moderately increased risk, and breast-feeding[25] and alcohol consumption,[26] which appear protective. Stress has also been reported to play a role in the development of RA.[27]
Genetic Risk Factors
Genetic factors implicated in RA have been widely studied using both candidate genes and whole-genome screens.[28] Whereas the strongest genetic risk factor for RA remains the HLA DRB1 shared epitope (SE), other candidate genes have been consistently implicated. In particular, an SNP (R620W) in the protein tyrosine phosphatase (PTPN22) gene, which has regulatory activities for both T and B cells, has been associated with RA;[29] this has been replicated in well-powered studies in different populations.[30–33] The same polymorphism has been associated with other autoimmune diseases, including JIA and SLE.[28] Studies of peptidylarginine deiminase 4 (PADI4) have shown a significant association,[34] but so far this has been replicated only in one other Japanese study[35] and not in populations from the UK,[36] France,[37] or Spain.[38] A recent meta-analysis of three Asian and six European studies showed that PADI4 polymorphisms were associated with RA in Asian populations; in European populations, only PADI4_94 showed a significant association.[39] Genes such as CTLA4, FCRL3, and the MHC class II transactivator (MHC2TA) have also been the focus of recent research.[28]
The search for novel genes has been advanced by the powerful approach of genome-wide association studies, as typified by the UK WTCCC. This identified three loci with independent associations with RA: two with reported strong associations (HLA-DRB1 and PTPN22) and a further locus on chromosome 7 whose genetic effects differed between genders, with a strong and apparently additive effect on disease status in females.[7] Further susceptibility loci are likely to be discovered using this approach. Similarly, analysis of alleles from 14 genes in over 2,300 cases and 1,700 controls from the North American Rheumatoid Arthritis Consortium (NARAC; the US counterpart of the WTCCC) and the Swedish Epidemiological Investigation of Rheumatoid Arthritis (EIRA) collections supported the association of RA with PTPN22, CTLA4, and PADI4 (NARAC cohort only).[40] There is also evidence of genetic overlap with other autoimmune diseases (SLE, AS, multiple sclerosis, and inflammatory bowel disease).[41] One of the newer and potentially more exciting areas of research is the evidence that certain polymorphisms can predict a patient's response to treatment,[42] which is likely to be the focus of a number of future studies.
Gene-environment Interactions
One of the most interesting studies has shown evidence of an important gene-environment interaction between the SE and smoking.[43] This Swedish population-based case-control study showed that the risk of developing RF-positive RA substantially increased in smokers carrying double copies of SE genes (RR = 15.7) compared with smokers with no copies of SE genes (RR = 2.4). Recent research has also shown additive and multiplicative interactions between PTPN22 and heavy cigarette smoking.[44] It has also been proposed that risk factors such as smoking, alcohol and coffee consumption, obesity, and oral contraceptive use may depend on the presence or absence of autoantibodies to cyclic citrullinated peptides.[45,46]
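The 'additive and multiplicative interactions' referred to above can be quantified; one common additive measure is the relative excess risk due to interaction (RERI). The sketch below uses the two relative risks reported for the SE-smoking study; the relative risk for SE carriage among never-smokers was not given here, so the value used for it is a labelled hypothetical.

```python
def reri(rr_both, rr_a, rr_b):
    """Relative excess risk due to interaction; > 0 implies super-additivity."""
    return rr_both - rr_a - rr_b + 1

rr_smoke = 2.4   # smokers, no SE copies (reported above)
rr_both = 15.7   # smokers, two SE copies (reported above)
rr_se = 4.0      # HYPOTHETICAL: two SE copies in never-smokers

print(f"RERI = {reri(rr_both, rr_smoke, rr_se):.1f}")
# 10.3 excess risk units beyond what the two factors would contribute
# separately on the additive scale, i.e., a strong interaction.
```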
Juvenile Idiopathic Arthritis
Epidemiological studies of JIA have been hampered by a lack of standardised criteria and case ascertainment, resulting in wide-ranging estimates: reported prevalence ranges from 0.07 to 4.01 per 1,000 children, and annual incidence from 0.008 to 0.226 per 1,000 children.[47] It is hoped that new diagnostic criteria will allow future studies to produce more consistent results. Ethnicity has been studied: European descent has been associated with a moderately increased risk of JIA, and JIA subtypes differ significantly between ethnic groups.[48] There have been few developments in terms of environmental risk factors, although infection remains the most favoured hypothesis.
Genetic Risk Factors
Major advances in epidemiological studies of JIA have focused mainly on genetic aspects. A genome-wide scan in 121 families (247 affected children) confirmed linkage of juvenile RA to the HLA region.[49] In addition, early-onset polyarticular disease has been linked to chromosome 7q11 and pauciarticular disease has been linked to chromosome 19p13, suggesting that multiple genes are involved in the susceptibility to juvenile RA. Other candidate genes, including polymorphisms in the migration inhibitory factor (MIF) gene, have been associated with JIA. A study of UK JIA patients showed that patients with an MIF-173*C allele had a small increased risk of JIA,[50] and serum MIF levels were also higher in patients with this allele. An SNP in the PTPN22 gene (a gene associated with both RA and SLE) has also been shown to have a novel association with JIA.[30] A recent meta-analysis has confirmed that the T allele and the T/T genotype of PTPN22 C1858T are associated with JIA.[51] Polymorphisms in the NRAMP1 gene may also play a role in the pathogenesis of JIA.[52] There is some evidence that a potentially protective CC genotype of the interleukin-6 (IL-6) gene is reduced in young patients.[53]
Psoriatic Arthritis
Epidemiologically, PsA is a complex disease to study because it is not simple to disentangle whether the risk factors revealed apply to the complete disease phenotype of PsA or to one of its two components. Studies that compare PsA with healthy controls cannot address this.
Environmental Risk Factors
Studies of environmental risk factors for PsA have focused on infection-related triggers and hormones. In a recent case-control study, exposure to rubella vaccination substantially increased the risk of PsA whereas injury requiring medical consultation, recurrent oral ulcers, and moving house all moderately increased the risk of PsA.[54] The strongest associations were with trauma, adding support to the hypothesis of a 'deep Koebner phenomenon' in PsA. These data suggest that infection-related triggers may be relevant and further studies are required to verify these results. In a nested case-control study, corticosteroid use (moderate increased risk) and pregnancy (decreased risk) were both associated with PsA, suggesting that changes to the immune system may play a role in this disease.[55]
Genetic Risk Factors
Developments in the pathogenesis of PsA again have been mainly in the genetic field. There is evidence that caspase recruitment domain 15 (CARD15), a susceptibility gene for Crohn's disease, has a role in PsA, supported by the increased incidence of psoriasis in patients with Crohn's disease. Initial reports suggested that over 38% of probands with PsA had at least one variant of the CARD15 gene compared with 12% of controls.[56] This pleiotropic autoimmune gene was proposed as the first non-MHC gene to be associated with PsA, but the finding has not been replicated in German[57] or Italian[58] cohorts. A novel model has been proposed in which PsA susceptibility is determined by the balance of activating and inhibitory composite killer Ig-like receptor-HLA genotypes.[59] Class I MHC chain-related gene A (MICA) may confer additional susceptibility to PsA: the MICA-A9 triplet repeat polymorphisms were present at a substantially higher frequency in PsA patients.[60] A linkage scan reported evidence implicating a locus on chromosome 16q in PsA, with a much higher logarithm of the odds (LOD) score for paternal than for maternal transmission (4.19 versus 1.03).[61] Functional cytokine gene polymorphisms have also been associated with PsA,[62] with tumour necrosis factor-alpha (TNF-α) -308 and TNF-β +252 polymorphisms significantly associated with age at psoriasis onset, presence of joint erosions in PsA, and progression of joint erosions in early PsA. A genome-wide association study recently replicated associations of PsA with IL-23 receptor and IL-12B polymorphisms and also identified a novel locus on chromosome 4q27.[63] A case-control study found evidence that HLA-Cw*06 and HLA-DRB1*07 are associated with the occurrence of type I psoriasis in patients with PsA, suggesting that the primary association is with age of onset of psoriasis.[64]
Ankylosing Spondylitis
Most of the epidemiological advances in AS have come from the ascertainment of novel genetic associations. Few environmental risk factors have been studied.
Genetic Risk Factors
Epidemiological studies have focused on the genetics behind AS. Twin studies have estimated the influence of genetics on the aetiopathogenesis of AS, indicating that additive genetic effects account for 94% of the variance in the causation of AS.[65] Genome-wide scans have confirmed the strong linkage of the MHC with AS, which is not surprising given the overwhelming relationship between HLA B27 and AS. However, this study suggested that only 31% of the susceptibility to AS is from genes in the MHC.[66] Thus, the search for non-MHC genes has gained much interest.[67] One of the most exciting developments has been the identification of two new loci for AS from a major genetic association scan: ARTS1 and IL-23R.[68] It was calculated from these studies that these genes are responsible for 26% (ARTS1) and 9% (IL-23R) of the population-attributable risk of AS. Another strong non-MHC linkage lies on chromosome 16q (overall LOD score of 4.7).[69] Other scans have identified regions on chromosomes 6q and 11q.[70] Combined analysis of three whole-genome scans by the International Genetics of Ankylosing Spondylitis Consortium showed that regions on chromosomes 10q and 16q had evidence suggestive of linkage. Other regions showing nominal linkage (in two or more scans) were 1q, 3q, 5q, 6q, 9q, 17q, and 19q. Evidence was also confirmed for regions previously associated with AS on chromosomes 2q (the IL-1 gene cluster) and 22q (cytochrome P450 2D6 [CYP2D6]).[71]
A linkage study of chromosome 22 in families with AS-affected sibling pairs found that homozygosity for poor-metaboliser alleles in the CYP2D6 (debrisoquine hydroxylase) gene was associated with AS. The authors of that study postulated that altered metabolism of a natural toxin or antigen by this gene may increase the susceptibility to AS.[72] AS has also been linked to the IL-1RN*2 allele[73] as have other inflammatory diseases such as ulcerative colitis and Crohn's disease.
Systemic Lupus Erythematosus
Environmental Risk Factors
The majority of research into environmental risk factors for SLE has focused on the role of hormones, given the higher prevalence of this disease in women. In a recent population case-control study, breast-feeding was found to be associated with a reduced risk of SLE, with a trend by number of babies breast-fed and total weeks of breast-feeding.[74] Women who developed SLE had an earlier natural menopause, whereas there was little association with current use or duration of use of hormone replacement therapy or the oral contraceptive pill, and no association with the use of fertility drugs. The authors of that study proposed that early natural menopause may be a marker for susceptibility to SLE. However, another study has shown that the risk of SLE or discoid lupus was moderately increased among current users of estrogens with at least 2 years of exposure.[75] A prospective cohort study of women found no relationship between oral contraceptive use and SLE, whether by duration of use or time since first use.[4]
There has been a long-standing interest in the role of chemical exposures in causing SLE. An interesting association has been found between lipstick use and SLE.[76] Researchers found that using lipstick 3 days per week was significantly associated with a small increased risk of SLE, a finding that may be worth replicating in future studies of environmental risk factors. The authors suggest that chemicals present in lipsticks (including eosin, 2-octynoic acid [a xenobiotic], and phthalate isomers) may be absorbed across the buccal mucosa and have a biological effect on disease development. Other risk factors associated with an increased risk of SLE include a history of hypertension, drug allergy, type I/II sun-reactive skin type, and blood transfusion (all moderately increasing the risk), and family history, which substantially increases the risk.[77] Alcohol consumption has been inversely associated with the risk of SLE.[77] A small increased risk was found with smoking, but exposure to estrogen or hair-colouring dyes, both previously proposed as risk factors, showed no association.
Genetic Risk Factors
There has been a major increase in the understanding of the genetics behind SLE, particularly over the last year, and this topic is concisely summarised in a recent review.[78] Two high-density case-control genome-wide association analyses have been published.[79,80] From these studies, overwhelming evidence has emerged for the association of various genes with SLE (MHC, ITGAM, IRF5, BLK, and STAT4[79,80]), along with strong evidence for a role for PTPN22 and FCGR2A.[51,79,81] Evidence has also emerged for other genes, including the TNF superfamily gene TNFSF4, whose upstream region contains a single risk haplotype for SLE.[82] Gene copy-number variation may also underlie variation in disease susceptibility, as highlighted by studies of the complement component C4, in which patients with SLE had a lower gene copy number of total C4 and of C4A.[83] Zero copies or one copy of the C4A gene increased disease susceptibility, whereas three or more copies appeared protective; the risk of SLE was substantially greater in subjects with only two copies of total C4, whereas those with five or more copies of C4 had a reduced risk of disease. Another focus of research has been the role of sex chromosomes in the development of SLE, especially given the high incidence in females. An interesting observation was the increased incidence of Klinefelter's syndrome (47,XXY) in male patients with SLE, in whom the frequency was substantially (14-fold) increased compared with men without SLE, suggesting that susceptibility to SLE could reflect an X-chromosome gene-dose effect.[84]
Scleroderma
Environmental Risk Factors
Epidemiological studies of Scl have focused on the role for toxic environmental exposures. Specifically, studies have carefully investigated silica and organic solvents as both are thought to stimulate the immune system and cause inflammation and increase antibody production. Recent reports show that occupational silica exposure moderately increases the risk of Scl, with medium exposure increasing the risk twofold and high exposure increasing the risk fourfold.[85] There is still interest in the relationship of silicone breast implants and Scl. However, a recent meta-analysis of nine cohorts, nine case-control studies, and two cross-sectional studies found no association with Scl or other connective tissue diseases.[86] Exposure to organic solvents remains a moderate risk factor and the presence of anti-Scl-70 autoantibodies may be an effect modifier as the association was stronger in patients with these antibodies.[87] However, such studies are difficult to undertake as exposure to other chemicals cannot be controlled.
Genetic Risk Factors
There is increasing evidence for a genetic role in Scl development.[88] The familial risk of Scl has been investigated in three large US cohorts, with a significant increase in risk observed: 2.6% in families with Scl compared with 0.026% in the general population.[89] Studies of HLA alleles suggest that the DQA1*0501 allele is significantly increased in men with Scl compared with healthy men; this allele was moderately associated with diffuse Scl in men but not with limited Scl.[90] HLA associations have also been studied in mutually exclusive autoantibody subgroups, lending support to the theory that the Scl subgroups are actually separate diseases.[91] Transforming growth factor-beta (TGF-β) and connective tissue growth factor may have roles in Scl, but further studies are required.[92,93] Increased expression of TGF-β receptors may account for the increased production of collagen type I by Scl fibroblasts.[94] Fibrillin-1 SNP haplotypes have been strongly associated with Scl in Choctaw and Japanese populations.[95] Long-term foetal microchimerism is also still being investigated as a potential risk factor.[96,97]
Osteoarthritis
Environmental Risk Factors
Studies on environmental risk factors for OA have focused on obesity, physical activity, and prior joint injury, all of which may increase stress on the joints. There have been several major cohort studies of OA, including the Framingham Study,[1] the Chingford 1000 Women Study,[2] Bristol OA 500,[98] and the North Staffordshire Osteoarthritis Project (NorSTOP).[99] From these and other studies, a number of risk factors have been identified, including high body mass index (BMI), previous injury, and regular sports participation.[100,101] The main preventable risk factor, and hence the subject of many reports, is obesity, which has been shown to substantially increase the risk of knee OA.[100,102] A moderate influence of obesity has also been found for hip OA.[103] Data from adult twins (St. Thomas' Hospital Adult Twin Registry) have shown a moderate association between high BMI and knee OA (OR = 3.9).[104] Manek and colleagues, who gathered those data, also concluded that this association was not influenced by shared genetic factors. Physical activity has also been examined:[105] one study found a moderate association between heavy physical workload and hip OA,[106] and high levels of physical activity were a moderate risk factor for OA of the knee/hip joints in men younger than 50 years.[107]
Men with higher maximal grip strength have been found to have a moderately increased risk of OA in the proximal interphalangeal, metacarpophalangeal (MCP), and thumb-base joints, and women with higher maximal grip strength a moderately increased risk of OA in the MCP joints.[108] There is some evidence that occupation can increase the risk of hand OA: a recent case-control study showed that occupations involving repetitive thumb use, and jobs perceived to allow insufficient breaks, were associated with OA of the carpometacarpal (CMC) joints.[109] However, not all studies agree: a cross-sectional study found no association with occupation, physical activity, or sports participation but found a moderate increase in the risk of hand OA with self-reported digital fracture.[110]
Genetic Risk Factors
Genetic studies in female twins have estimated that the genetic contribution to radiographic hip OA is 58% for OA overall and 64% for joint space narrowing.[111] Studies have revealed that disease risk differs for males and females at different sites and thus there may be specific genes rather than a single OA phenotype.[112] The IL-1 gene cluster is a key regulator in a number of chronic disease processes, and within this cluster, haplotypes such as IL1A-IL1B-IL1RN, which confers a moderate increase in the risk of OA, and IL1B-IL1RN, which confers a fivefold reduced risk, have been identified.[113] This cluster has also been proposed to confer susceptibility for knee OA but not hip OA.[114] Functional polymorphisms in the frizzled motif associated with bone development (FRZB) genes have been found to confer susceptibility to hip OA in females.[115] Radiographic OA is also associated with genotypes of the insulin-like growth factor I gene.[116]
Data from the Rotterdam study showed that polymorphisms in the estrogen receptor-alpha (ESR1) gene are associated with radiographic knee OA in elderly men and women.[117] In a case-control study investigating several candidate genes, the strongest associations with clinical knee OA were found with a haplotype in ADAM12 (a disintegrin and metalloproteinase domain 12) and with ESR1 in women,[118] and again with ADAM12, along with the CILP (cartilage intermediate layer protein) haplotype, in men. There is also evidence that the cyclooxygenase-2 enzyme encoded by PTGS2 has a role in the pathogenesis of knee OA.[119] The iodothyronine-deiodinase enzyme type 2 (DIO2) gene has been identified as a new susceptibility locus for OA using a genome-wide linkage scan.[120] A meta-analysis of more than 11,000 individuals provided evidence for an SNP in GDF5 having a positive association with knee OA in both European and Asian cohorts.[121] Other genes so far implicated include the IL-1 gene cluster, the matrilin-3 gene, the IL-4 receptor, the frizzled-related protein-3 (FRZB) gene, the metalloproteinase gene ADAM12, and the asporin (ASPN) gene.[122] An ambitious study that will screen over 8,000 people with hip or knee OA and 6,000 healthy controls (arcOGEN; Arthritis Research Campaign Osteoarthritis GENetics)[123] has recently been announced and is likely to lead to the identification of further genes associated with OA.
The Dutch GARP (Genetics, Arthrosis, and Progression) study has shown that there is a moderate increased risk for familial aggregation of both hand and hip OA whereas there was no increased risk for knee OA.[124] That there should be greater genetic effects on OA of the hand compared with other sites is not surprising given the relatively weaker role for environmental (including mechanical) factors. The familial risk of hand OA has shown a moderate increase in risk in sisters of women affected with hand OA and this risk was substantially increased with the severity of the disease, with sisters of those with severe first CMC OA having an RR of 6.9.[125] Whole-genome linkage scans on female twins have shown significant linkage of distal interphalangeal (DIP) OA on chromosome 2 and Tot-KL (Kellgren-Lawrence score for both hands) on chromosome 19.[126] Polymorphisms in the vitamin D receptor (VDR) gene have also been associated with symmetrical hand OA, with a novel finding of a joint effect of low calcium intake and VDR polymorphisms (aT haplotype) having a moderate increased risk of symmetrical hand OA.[127] Data from the Framingham Study have shown that several chromosomes (DIP joint on chromosome 7, first CMC joint on chromosome 15, and two sites in the female DIP joint on chromosome 1 and first CMC joint on chromosome 20) contain susceptibility genes for hand OA and that a joint-specific approach rather than a global approach to hand OA may be more useful in further investigations of these regions.[128] Genome-wide scans have also revealed linkage peaks on chromosomes 4q, 3p, and the short arm of chromosome 2 for idiopathic hand OA.[129] Genome-wide significance was reached for a locus on chromosome 2 for first CMC and DIP joints coinciding with the MATN3 gene, which encodes the extracellular matrix protein, matrilin-3.
Gout
Environmental Risk Factors
Studies on environmental risk factors for gout have focused mainly on the long-established risk factors of high purine diet and diuretic use. The incidence of gout is increasing[130] and high alcohol consumption is no longer the only risk factor for the disease.[131] Other risk factors that have been proposed include longevity, metabolic syndromes,[132] and use of certain pharmacologic agents.[133] The high incidence in some ethnic groups has no obvious host factor, and genetic factors may be implicated in these groups.
Dietary factors have a strong association with gout. Much of the research in this area has been conducted by Choi and colleagues.[6,134–137] As part of a large prospective study in men (the Health Professionals Follow-up Study), a number of factors were associated with an increased risk of gout. Higher adiposity, hypertension, and diuretic use were all moderate risk factors, whereas weight loss had a protective role.[136] High intake of sugar-sweetened drinks and high fructose intake from fruit juice and fruit have been associated with a small increased risk of gout.[137] High meat intake and seafood intake (purine intake) have also been positively associated with gout with a small increase in risk.[6] In the same study, long-term coffee consumption was inversely associated with gout.[138] Consumption of low-fat dairy products has been shown to decrease the risk of gout;[6] milk proteins (casein and lactalbumin) can reduce serum uric acid levels in healthy individuals.
Genetic Risk Factors
Advances in the genetics of gout include the identification of variation in the SLC2A9 gene, which appears to impair the removal of uric acid from the blood.[139] A polymorphism in the TNF-α promoter has been shown to be significantly associated with gout.[140] Genetic studies have included families with purine-metabolism defects and case-control studies of isolated aboriginal cohorts with primary gout.[133]
Fibromyalgia and Chronic Widespread Pain
These poorly defined conditions are nonetheless the target of many investigations seeking to unravel risk factors for their causation or severity.
Environmental Risk Factors
Studies on environmental risk factors for FM and CWP have focused on physical trauma and psychosocial factors. Physical trauma in the months prior to disease onset has been significantly associated with FM.[141] FM was found to be 13 times more likely in patients who had a prior injury to the cervical spine compared with those with injuries to the lower extremities.[142] In a population-based prospective study, three psychosocial factors independently predicted a moderate increased risk of the development of CWP: somatisation, health-seeking behaviour, and poor sleep.[143]
Subjects with all three factors had a substantial increased risk of developing CWP.
There may be biologically based risk factors. Thus, abnormalities in the hypothalamic-pituitary-adrenal (HPA) stress-response system may predict the onset of CWP. In a recent study, high levels of cortisol after dexamethasone and high levels in evening saliva moderately increased the risk of CWP.[144] Low levels in morning saliva were also associated with a small increase in risk. These factors were both independent and additive predictors of CWP, with over 90% of new-onset cases of CWP being identified by one or more of these HPA factors.
Genetic Risk Factors
Perhaps surprisingly, there have been some interesting suggestions of a genetic basis to FM. FM has been shown to aggregate strongly in families: the odds of FM in a relative of a proband with FM, versus a relative of a proband with RA, corresponded to an odds ratio of 8.5.[145] Genotypes in the promoter region of the serotonin transporter gene (5-HTT) have been analysed in FM patients: a higher frequency of the S/S genotype was found in patients than in controls,[146] supporting the hypothesis of altered serotonin metabolism in FM. Family studies have also shown significant genetic linkage of the HLA region to FM.[147] Polymorphisms in the gene encoding the catechol-O-methyltransferase (COMT) enzyme may also have a role in FM: certain genotypes, taken together, occurred at higher frequency in patients than in controls, and a third genotype was significantly lower in the control groups.[148]
Conclusion
Over the last 10 years, there have been major epidemiological advances, particularly in the field of genetic risk factors, where new candidate genes have been identified and informative gene-environment interactions have been studied. Studying lone environmental factors has been less fruitful: such factors often explain only a small number of cases and on their own are not sufficient to cause disease, both of which present considerable epidemiological challenges. The hope is that, as we understand more about the genetics behind these diseases and genetic studies become more technically practical, stratification by genetic subgroup will help to identify environmental triggers (such as smoking). In some disease areas, however, progress has been very slow and we still understand very little.
From Current Opinion in Rheumatology
Uric Acid in Heart Disease
A New C-reactive Protein?
Eswar Krishnan; Jeremy Sokolove
Posted: 02/23/2011; Curr Opin Rheumatol. 2011;23(2):174-177. © 2011 Lippincott Williams & Wilkins
Abstract and Introduction
Abstract
Purpose of review To review and interpret recently published data on hyperuricemia and cardiovascular disease, to present an opinion on the nature of the link between serum uric acid concentration and the risk for cardiovascular outcomes, and to comment on the implications for clinical practice.
Recent findings Evidence has accumulated in prospective observational studies that link hyperuricemia among younger adults with the risk of subsequent hypertension. Such associations have been observed with respect to insulin resistance, diabetes, and other cardiovascular risk factors. Newer data confirm the link between hyperuricemia and cardiovascular mortality. The use of allopurinol has been shown to be associated with reduced mortality risk in longer term observational studies and with reduced blood pressure in short-term randomized controlled trials. None of these findings is confounded by traditional risk factors.
Summary The available evidence has established a link between hyperuricemia and cardiovascular disease and this may be causal. Without waiting for the resolution of causality arguments, one can start using serum uric acid concentration as an inexpensive cardiovascular risk marker.
Introduction
Elevated levels of serum uric acid (SUA) have been associated in population studies with an increased risk of cardiovascular disease (CVD).[1,2•] What remains unclear is the causality of this relationship: does SUA contribute independently to the pathophysiology of CVD, or is it simply an epiphenomenon of concurrent conditions such as hypertension, kidney disease, or the metabolic syndrome? This article will provide an overview of the current evidence linking hyperuricemia and CVD and address potential mechanisms that could implicate it directly in CVD or its risk factors, especially hypertension. Finally, we will review evidence from observational and early prospective studies of CVD risk reduction by urate-lowering therapy.
Individual levels of SUA vary according to genetic background, renal function, diet, and metabolic factors. From an epidemiological perspective, a large number of prospective observational studies and case–control studies have linked elevated serum uric acid concentration with adverse outcomes such as acute myocardial infarction, hypertension, heart failure, peripheral vascular disease, stroke, and the metabolic syndrome.
Fortunate is he, who is able to know the causes of things [Virgil 29 BC, Georgica (II, v. 490)].
Hypertension
Animal studies offer the opportunity for interventions well beyond those that can be applied in human studies. Lower animals, unlike humans, possess the uricase enzyme, which breaks down uric acid to easily excreted allantoin, and thus cannot spontaneously develop hyperuricemia. In murine models of induced hyperuricemia, however, hypertension has been observed: treatment with allopurinol lowered blood pressure concurrent with a fall in SUA, whereas the antihypertensive agent hydrochlorothiazide had no such effect.[3]
Small-scale human studies have observed a similar antihypertensive effect of urate-lowering therapy. In a 12-week study,[4] allopurinol reduced both systolic and diastolic blood pressure in a cohort of 21 hyperuricemic patients. Similarly, an antihypertensive effect was demonstrated in obese adolescent males with hypertension,[5] and the same investigators have more recently presented evidence in abstract form that allopurinol lowers blood pressure and that this effect exceeds that seen with the uricosuric/antihypertensive agent losartan.
Cardiomyopathy
Regardless of the cause, dilated cardiomyopathy is associated with left ventricular strain, impaired myocardial oxygen consumption, and endothelial dysfunction. Observational and interventional studies in animals and humans have demonstrated the ability of allopurinol to ameliorate these effects. One study performed in dogs demonstrated that intravenous allopurinol improved myocardial efficiency in animals with dilated cardiomyopathy.[6] Given the four-fold increase of xanthine oxidase activity in the failing heart, and therapy limited to a single dose of allopurinol, it is possible that this effect was independent of SUA reduction. Similar studies in humans have demonstrated the ability of intracoronary allopurinol to increase myocardial energy metabolism.[7] Upregulation of myocardial xanthine oxidase in this context may indicate that allopurinol acts via xanthine oxidase inhibition. Doehner et al.[8] demonstrated the ability of allopurinol to improve peripheral vasodilator capacity by randomizing hyperuricemic chronic heart failure (CHF) patients to allopurinol 300 mg/day or placebo for 7 days. In the treatment group, a significant improvement was seen in endothelium-dependent flow, accompanied by a significant reduction in SUA as well as in allantoin, a marker of free-radical generation. Finally, another study[9] in humans demonstrated the ability of allopurinol therapy to reduce serum B-type natriuretic peptide (BNP); however, despite these biomarker changes, allopurinol did not alter exercise capacity in chronic heart failure.
Coronary Artery Bypass Grafting
A recent study[10] suggested that hyperuricemia may be an independent risk factor for adverse events, including decreased survival, after coronary artery bypass grafting (CABG). This study took into account many clinical covariates, but it remains possible that uric acid is merely associated with comorbidities that portend decreased long-term survival. Supporting an independent perioperative risk of SUA are studies demonstrating that allopurinol can reduce perioperative cardiac events[11] and improve postoperative recovery in those undergoing CABG.[12] However, given the relatively brief duration of allopurinol therapy in these studies, it is possible that the protective effect derives from its antioxidant rather than its hypouricemic action.
Angina
The most recent interventional study, by Norman et al.,[13••] enrolled 65 patients with angiographically demonstrated coronary artery disease, chronic stable angina, and a positive exercise stress test. Patients were randomized to high-dose allopurinol (600 mg/day) or placebo for 6 weeks and then crossed over to the other arm. During allopurinol treatment, median time to chest pain, median exercise time, and time to ST depression all increased. The authors excluded those with a history of gout and attributed the observations to other, nonhypouricemic effects of xanthine oxidase inhibition.[14,15]
The Case against and for Hyperuricemia as a Causal Link
There are three arguments against causality in the hyperuricemia-CVD link; most of the evidence supporting them is epidemiologic in nature.
Argument 1: Coincidence Proponents of this argument hold that there is no independent association of hyperuricemia with CVD, as some studies have failed to confirm such a link.[16,17] The argument goes that the relationship is coincidental or due to confounding risk factors such as obesity and hypertension. However, the epidemiological studies that failed to discern any independent association (let alone a causative relation) are far fewer than those that do show such a link; they entailed younger, generally healthier populations and may have lacked the power to identify the contribution of hyperuricemia to cardiovascular outcomes. The power needed to show a conclusive null result is much higher than that needed to show a positive link, as in the case of the two pivotal randomized controlled trials[18,19] that showed that aspirin did not prevent recurrent myocardial infarction.
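The power asymmetry invoked here can be made concrete with a standard two-proportion sample-size calculation. In the sketch below, the baseline event risk and the effect sizes are assumed purely for illustration; ruling out a modest residual effect requires at least the sample needed to detect it.

```python
from scipy.stats import norm

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Per-group sample size for comparing two proportions (normal approximation)."""
    z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return (z_a + z_b) ** 2 * variance / (p1 - p2) ** 2

p0 = 0.05  # assumed baseline event risk over follow-up
print(f"Detect RR = 2.0:  n = {n_per_group(p0, 2.0 * p0):,.0f} per group")
print(f"Detect RR = 1.2:  n = {n_per_group(p0, 1.2 * p0):,.0f} per group")
# Roughly 430 versus 8,200 per group: excluding even a modest residual effect
# demands far larger studies than demonstrating a strong positive association.
```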
Argument 2: Reverse Causation This line of argument admits the existence of an independent association between hyperuricemia and cardiovascular disease but attributes it to residual confounding by other risk factors such as chronic renal failure,[20] hyperlipidemia,[21] and the metabolic syndrome.[22–24] One study[25] found a correlation between serum urate and carotid intima-media thickness (cIMT), but the effect was lost after adjustment for relevant behavioral and biologic correlates. This conclusion is contrary to several other studies demonstrating an association of hyperuricemia with cIMT atherosclerosis even after adjustment for covariates.[26,27•,28]
Argument 3: Common-causal Genetic Variable This is the strongest argument against uric acid as a causal agent in CVD. In 2002, Ghei et al.[29] summarized the genetic links of the hyperuricemia trait with those for cardiovascular risk factors such as dyslipidemia, renal dysfunction, and impaired glucose metabolism. However, nonepidemiological interventional data are more supportive of hyperuricemia as a causal factor.
The strongest interventional support comes from the study by Norman et al.[13••] discussed above. Notably, the authors excluded patients with a history of gout, and SUA levels were not reported as relevant to the effect. They acknowledged that the precise mechanism of the antiischemic effect remains unclear, attributing it either to blocking the xanthine oxidase-mediated conversion of molecular oxygen into mediators of oxidative stress[14] or to inhibition of xanthine oxidase-mediated breakdown of ATP to AMP, thereby augmenting high-energy phosphates in cardiac tissue.[15]
Support also derives from an electronic medical record-based study of 9,924 hyperuricemic veterans, in whom urate-lowering therapy reduced mortality even after adjustment for other prognostic factors (HR 0.77; CI 0.65–0.91). Furthermore, several human and animal studies have prospectively applied urate-lowering therapy to direct CVD risk factors, including hypertension, and more directly to angina,[13••] cardiomyopathy,[7] and coronary artery bypass grafting.[12]
The C-reactive Protein Analogy
There are several striking similarities between SUA and C-reactive protein (CRP) as markers of coronary risk. CRP is a liver-produced protein that is up-regulated in many clinical situations.[30] A significant number of observational studies have shown that a high CRP level is an independent risk factor for CVD.[30,31] In some interventional studies, reduction of CRP was associated with improved cardiovascular risk, although a recent systematic review concluded that the case is weak.[30,32] Biological mechanisms linking CRP to coronary artery disease (CAD) have been elucidated.[33] Some have argued that CRP is in the causal pathway for coronary artery disease, but others dispute this assertion.[33–35] A consensus is evolving in the cardiology community that CRP can be useful in the prediction of coronary artery disease.
The recent JUPITER study[33] demonstrated that in those with elevated CRP, even in the setting of normal LDL levels, initiation of an HMG-CoA reductase inhibitor (statin) reduced the risk of cardiovascular events, including myocardial infarction, stroke, and the need for cardiac revascularization, as well as all-cause mortality. Though a clear causal role for CRP in the initiation or propagation of CVD is uncertain, use of this marker has identified both those at risk for CVD and those who may benefit from risk-reduction interventions.[36] It is possible to envisage a similar use of serum urate. CRP is also being followed as a biomarker of response to therapy.[37] Whether nonhypouricemic cardiovascular risk modifications would affect serum uric acid in a similar fashion has not been studied. Because the metabolic syndrome and serum insulin levels can influence serum urate, it is conceivable that serum urate could serve as a barometer of metabolic and cardiovascular status.
Conclusion
Is uric acid a maker or a marker? We believe that this issue is secondary. The primary question is whether we can use the data on hyperuricemia and CVD to help patients in the real world. We believe that the answer is a resounding yes. We propose that serum uric acid concentrations can be used as a cardiovascular risk marker in the same way that C-reactive protein is being used. It remains important that investigations continue into the potential pathologic role played by uric acid. This is critical as studies continue into the use of urate-lowering therapy for cardiovascular risk modification.
From Arthritis Research & Therapy
Protein, Iron, and Meat Consumption and Risk for Rheumatoid Arthritis: A Prospective Cohort Study
Elizabeth Benito-Garcia; Diane Feskanich; Frank B Hu; Lisa A Mandl; Elizabeth W Karlson
Posted: 03/29/2007; Arthritis Research & Therapy. 2007;9(1) © 2007 BioMed Central, Ltd.
Abstract and Introduction
Abstract
A recent prospective study showed that higher consumption of red meat and total protein was associated with increased risk for inflammatory polyarthritis. We therefore prospectively examined the relationship between diet (in particular, protein, iron, and corresponding food sources) and incident rheumatoid arthritis (RA) among 82,063 women in the Nurses' Health Study. From 1980 to 2002, 546 incident cases of RA were confirmed by a connective tissue disease screening questionnaire and medical record review for American College of Rheumatology criteria for RA. Diet was assessed at baseline in 1980 and five additional times during follow up. We conducted Cox proportional hazards analyses to calculate the rate ratio of RA associated with intakes of protein (total, animal, and vegetable) and iron (total, dietary, from supplements, and heme iron) and their primary food sources, adjusting for age, smoking, body mass index, and reproductive factors. The multivariate models revealed no association between RA and any measure of protein or iron intake. In comparisons of highest with lowest quintiles of intake, the rate ratio for total protein was 1.17 (95% confidence interval 0.89-1.54; P for trend = 0.11) and for total iron it was 1.04 (95% confidence interval 0.77-1.41; P for trend = 0.82). Red meat, poultry, and fish were also not associated with RA risk. We were unable to confirm that there is an association between protein or meat and risk for RA in this large female cohort. Iron was also not associated with RA in this cohort.
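The Cox analysis described in this abstract can be illustrated schematically. The sketch below, using the open-source lifelines package, is a minimal rendering under assumed column names and an assumed input file; the published analysis was not necessarily implemented this way, and the quoted highest-versus-lowest quintile contrasts would use indicator coding rather than the single trend term shown here.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical input: one row per participant, with person-time and covariates.
df = pd.read_csv("nhs_cohort.csv")

# Quintiles of total protein intake, coded 0-4 (Q1 = reference); a single
# numeric quintile term yields a per-quintile trend estimate.
df["protein_q"] = pd.qcut(df["total_protein"], 5, labels=False)

cph = CoxPHFitter()
cph.fit(
    df[["follow_up_years", "incident_ra", "protein_q", "age", "smoking", "bmi"]],
    duration_col="follow_up_years",  # person-time contributed by each woman
    event_col="incident_ra",         # 1 = confirmed incident RA, 0 = censored
)
cph.print_summary()  # hazard (rate) ratios with 95% confidence intervals
```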
Introduction
Rheumatoid arthritis (RA) is associated with both genetic and environmental factors,[1-7] but studies of dietary risk factors have been inconclusive.[8] Studies of diet and risk for RA offer the potential to identify modifiable factors and so prevent RA in high-risk patients; they may also provide insights into disease pathogenesis.
Buchanan and Laurent[9] implicated diets high in protein in the etiology of RA. Furthermore, low-protein diets may improve RA symptoms.[10-13] In ecologic studies, the prevalence of RA is higher in countries with greater consumption of red meat.[14] More recently, Pattison and colleagues[15] reported the first prospective investigation of red meat and risk for inflammatory polyarthritis (IP) and concluded that higher intakes of both red meat and protein increased the risk for IP, whereas iron - another nutrient component of meat - exhibited no association. The authors acknowledged that it remained unclear whether the observed associations were causative or whether meat consumption was a marker for other lifestyle factors.
To examine this issue further, we prospectively assessed risk for RA in relation to intakes of protein, iron, and meat in women in the Nurses' Health Study (NHS). We examined these intakes with further classifications into animal and vegetable protein; dietary, supplemental, and heme iron; and red meat, poultry, and fish.
Materials and Methods
The NHS was established in 1976 when 121,700 female registered nurses (98% white), aged 30-55 years and residing in one of 11 US states, completed and returned the initial NHS mailed questionnaire on their medical history and lifestyle. Every 2 years, follow-up questionnaires have been sent to obtain up-to-date information on risk factors and to identify newly diagnosed diseases. Deaths are reported by family members or by the postal service in response to the follow-up questionnaires. In addition, we use the National Death Index to search for nonrespondents who might have died in the preceding interval. By comparing deaths ascertained from independent sources, we estimate that we have identified at least 98.2% of deaths occurring in the cohort.[16]
The Partners HealthCare Institutional Review Board approved all aspects of this study, and all participants gave informed consent before they were entered into the study.
Ascertainment of Rheumatoid Arthritis Cases
As previously described,[17] self-reports of RA were confirmed using the Connective Tissue Disease Screening Questionnaire[18] and by medical record review for American College of Rheumatology (ACR) criteria for RA,[19] conducted by two rheumatologists. We confirmed 807 cases of incident RA from 1976 to 2002.
Study Population
For all analyses, we excluded the following: prevalent RA cases diagnosed before June 1980; RA cases with missing date of diagnosis; women who reported RA or connective tissue disease but in whom the diagnosis of RA was not confirmed by medical record review; nonresponders to the semiquantitative Food Frequency Questionnaire (FFQ) in 1980 (the baseline for this analysis); and participants with an unacceptable FFQ (<500 kcal/day or >3,500 kcal/day, accounting for approximately 4% of returned dietary questionnaires). Women were also censored during follow up when they failed to respond to any subsequent biennial questionnaire, because incident RA could not be identified in these cases. Thus, the final study group included 82,063 women followed from 1980 until 2002, among whom 546 incident cases of RA met the inclusion criteria, for a total of 1,668,894 person-years of follow up.
Assessment of Dietary Intake
Dietary intake was assessed in 1980, 1984, 1986, 1990, 1994, and 1998 using a semi-quantitative FFQ. In 1980, a total of 98,462 (81%) of the participants completed the FFQ and the completion rate has remained at about 80% during follow up. The initial FFQ contained 61 food items, but it has been expanded over the years such that 147 foods appeared on the 1998 questionnaire, including nine items for red meat (beef, pork, and lamb), four items for poultry (chicken and turkey), and four items for fish. For each food, participants reported their frequency of consumption of a specified serving size using nine frequency categories, ranging from never to six or more per day.
The validity and reproducibility of the FFQ for nutrients[20] and foods[21] have been documented elsewhere. Intakes calculated from the 1980 FFQ were found to be reasonably correlated with those from four 1-week diet records collected over 1 year among 173 NHS participants.[20,22] The Pearson coefficients were 0.47 for total protein, 0.55 for total iron,[20] and 0.38 for meat.[21]
In this analysis, we examined associations between risk for RA and intakes of the following individual nutrients and components: total protein, animal protein, vegetable protein, total iron, dietary iron (from food sources), supplemental iron (from multivitamins and supplements), and heme iron (the iron with the highest bioavailability). We also examined meat, poultry, and fish (the primary food sources of protein and iron). At the 1998 dietary assessment in this cohort, 19% of protein came from red meat, 14% came from poultry, and 7% from fish. Heme iron also came primarily from the consumption of red meat (28%), poultry (24%), and fish (15%). Supplements contributed 25% of the total iron intake in this cohort.
Assessment of Nondietary Factors
Age, body mass index (weight [in kilograms] divided by height [in meters] squared), and smoking status were updated every 2 years with information from the biennial questionnaires. Other factors were reported once: age at menarche in 1976, total months of breastfeeding for all children in 1986, and regularity of menses from age 20 to 35 years (very regular, usually regular, usually irregular, and very irregular) in 1982.
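For example, a hypothetical participant weighing 70 kg with a height of 1.65 m would be categorized as follows:

\[
\mathrm{BMI} = \frac{70\ \mathrm{kg}}{(1.65\ \mathrm{m})^2} \approx 25.7\ \mathrm{kg/m^2},
\]

placing her in the 25 to 29.9 kg/m2 category used in the multivariate models described below.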
Statistical Analyses
The number of person-years of follow up was ascertained based on the interval between the date of return of the 1980 questionnaire and the date of diagnosis of RA (as defined in the medical record), death, the end of the study period (1 June 2002), or loss to follow up (defined as no further return of questionnaires) for each participant.
Nutrient and food intakes were categorized into quintiles, and incidence rates for RA were calculated by dividing the number of incident cases by the number of person-years in each quintile of dietary exposure. Rate ratios (RRs) were calculated by dividing the incidence rates in the higher quintiles by the corresponding rate in the reference (lowest) quintile. Age-adjusted and multivariate RRs were estimated using Cox proportional hazards models adjusting for age (continuous variable) and other potential confounders. We controlled for the following variables because they have either been shown to be associated with RA or were found in this study to be potential confounders: body mass index (categorized as <22, 22 to 24.9, 25 to 29.9, 30 to 34.9, and ≥35 kg/m2), smoking status (never, past, or present), and total lifetime breastfeeding history (nulliparous, parous and breastfeeding for 0, 1 to 11, or ≥12 total months). In addition, we controlled for total energy to reduce measurement error due to general over-reporting or under-reporting of food items.[23] Age at menarche and regularity of menses were not retained as covariates. For all RRs, we calculated the 95% confidence interval (CI). All P values were two-tailed, and P < 0.05 was considered to be statistically significant. Tests for trend were conducted by assigning the median value for each quintile of nutrient and food intake and modeling this variable as a continuous variable.
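As a rough illustration of this quintile-based calculation, the sketch below uses simulated data and hypothetical variable names (it is not the authors' code, and the published age-adjusted and multivariate RRs come from Cox models rather than this crude computation):

import numpy as np
import pandas as pd

rng = np.random.default_rng(42)

# Hypothetical cohort: energy-adjusted protein intake (g/day), person-years
# contributed, and confirmed incident RA (0/1). Simulated values, not NHS data.
df = pd.DataFrame({
    "protein": rng.normal(75, 12, 10_000),
    "person_years": rng.uniform(5, 22, 10_000),
    "ra_case": rng.binomial(1, 0.007, 10_000),
})

# Categorize intake into quintiles (1 = lowest = reference), as in the paper.
df["quintile"] = pd.qcut(df["protein"], 5, labels=False) + 1

grouped = df.groupby("quintile")
rates = grouped["ra_case"].sum() / grouped["person_years"].sum()  # cases per person-year

# Crude rate ratios relative to the lowest-intake quintile.
print(rates / rates.loc[1])

# For the trend test, each woman is assigned her quintile's median intake,
# which is then entered into the model as a continuous covariate.
df["trend_var"] = grouped["protein"].transform("median")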
Nutrient intakes were energy-adjusted using the multivariate residual method.[20] In order to represent the long-term dietary patterns of individual women, our primary analysis used cumulative average food and nutrient intakes from all available dietary questionnaires up to the start of each 2-year interval.[24] For example, the 1980 diet was related to RA incidence during the period from 1980 to 1984; the average of the 1980 and 1984 diets was related to RA incidence during the period from 1984 to 1986; the average of the 1980, 1984, and 1986 diets was related to the RA incidence during the period between 1986 and 1990, and so on, through to 2002.
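For a single participant, the cumulative-averaging scheme just described reduces to the small sketch below (intake values are hypothetical; the interval boundaries follow the example in the text):

ffq_years = [1980, 1984, 1986, 1990, 1994, 1998]
# One woman's (hypothetical) energy-adjusted total protein intakes, g/day.
intake = {1980: 62.0, 1984: 70.5, 1986: 68.0, 1990: 75.0, 1994: 72.5, 1998: 80.0}

def cumulative_average(interval_start: int) -> float:
    """Mean of all FFQ reports available up to the start of a follow-up interval."""
    available = [intake[y] for y in ffq_years if y <= interval_start]
    return sum(available) / len(available)

# Exposure for the 1986-1990 interval = mean of the 1980, 1984, and 1986 reports.
print(round(cumulative_average(1986), 1))  # (62.0 + 70.5 + 68.0) / 3 ≈ 66.8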
Results
Age-standardized characteristics of the study population in 1990, according to intakes of total protein and heme iron, are shown in Table 1. The 1990 time point was chosen because it represents the approximate midpoint of follow up. Body mass index was higher among women in the highest consumption categories of total protein and heme iron. Women with the lowest protein and highest heme iron consumption were more likely to smoke and, if parous, were less likely to have breastfed for a total of 12 months or more. Higher total protein intakes were associated with higher heme iron intakes.
In the age-adjusted model, higher total protein intake was associated with greater risk for RA (quintile 5 [89.0 g/day] versus quintile 1 [60.8 g/day]: RR 1.23, 95% CI 0.94-1.61; P for trend = 0.04), but this association was attenuated and the test for trend was no longer significant in the multivariate model (RR 1.17, 95% CI 0.89-1.55; P for trend = 0.12; Table 2). Neither the animal nor the vegetable component of protein exhibited any relation to risk for RA. We also did not observe any association with total iron intake (RR 1.00, 95% CI 0.74-1.36 for the highest versus lowest quintile) or with its components of dietary iron, supplemental iron, and heme iron.
No significant associations were observed between the incidence of RA and consumption of total meat, red meat, poultry, or fish (Table 3). For total meat, which included red meat and poultry, the multivariate RR was 0.91 (95% CI 0.67-1.23) in the highest (2.54 servings/day) versus lowest (0.82 servings/day) quintiles of intake. More detailed analyses of individual foods contributing to each of these major food groups also exhibited no association with RA.
To avoid confounding by indication (for example, dietary changes occurring after RA symptom onset), we also performed analyses in which dietary variables were updated only until the date of first symptom of RA, rather than until the date of RA diagnosis. We also performed lagged analyses such that the dietary intakes associated with RA cases were assessed at least 4 years before the date of diagnosis. In order to account for possible influence of recent dietary intake, we also examined our exposures based on the most recent dietary measures, rather than using long-term average intakes. The results revealed no associations with the nutrient or food exposures.
Discussion
In this large prospective cohort study involving women, we observed no significant association between protein or iron intakes and risk for RA, including specific analyses of animal and vegetable protein, heme iron, and iron from foods and from supplements. Furthermore, no associations were observed between RA and the primary food sources of these nutrients, namely red meat, poultry, and fish.
Our results differ from those of a nested case-control study[15] that reported increased risk of IP with greater consumption of protein and red meat. Pattison and coworkers[15] studied dietary intake and risk for IP between 1993 and 2002, within a prospective population-based study of cancer incidence in Norfolk, England (European Prospective Investigation of Cancer Incidence [EPIC]). They compared 88 patients with IP, identified by linkage with the Norfolk Arthritis Register (a primary care-based inception study of IP), with 167 age-matched and sex-matched control individuals from EPIC who had remained free from IP during the follow-up period. Although the study did not analyze the protein subtypes of animal and vegetable protein, it did analyze the food sources that contribute to each of these categories. The investigators reported an increased risk for IP with greater protein consumption (>75.3 g/day versus <62.4 g/day: adjusted odds ratio [OR] 2.9, 95% CI 1.1-7.5) and no association with iron. In contrast to our findings, the study by Pattison and coworkers indicated that individuals with the highest level of consumption of red meat (>58.0 g/day versus <25.5 g/day: adjusted OR 1.9, 95% CI 0.9-4.0) and of red meat combined with meat products (for instance, sausage and ham; >87.8 g/day versus <49.0 g/day: adjusted OR 2.3, 95% CI 1.1-4.9) were at increased risk for IP.
The discrepancy between the findings of that study and ours could be attributed to methodologic differences. First, the EPIC study assessed dietary intake once, using a 7-day food diary, whereas we used a semiquantitative FFQ administered repeatedly. The FFQ consists of two components[25]: a food list and a frequency response section in which individuals report how often each food was eaten over the previous year. The 7-day food diary consists of a detailed listing of all foods consumed by an individual over 1 or more days.[26] Food intake is recorded by the individual at the time the foods are eaten, which has the advantages of relying less on memory and permitting direct assessment of portion sizes. In comparison, the FFQ suffers the disadvantages of restrictions imposed by a fixed list of foods, memory, perception of portion sizes, and interpretation of questions. Dietary records provide more precise quantification of foods consumed, but they reflect only short-term diet, because only a limited number of days of diet records are used. Results of validation studies demonstrate greater correlation of blood levels of certain nutrients with 7-day diet diaries than with FFQ findings.[27]
However, our objective was to assess long-term dietary exposures. Therefore, we cumulatively averaged and updated dietary intake assessed six different times over the 22-year period of follow up, which is known to reduce random error in long-term dietary measurement, rather than relying upon one assessment at baseline. Furthermore, results of analyses of more recent diet were consistent with analyses of cumulative diet. Even if absolute measures are not precise, the FFQ is able to rank respondents into higher and lower categories of intake. We energy-adjusted nutrient intakes in order to account for differences due to under-reporting or over-reporting on the FFQ.
Bingham and coworkers[28] demonstrated a strong association between diet and cancer using 7-day diaries but a modest relationship when the FFQ was used, and they suggested that this pattern might also be seen in other studies analyzing the association of diet and chronic diseases. However, previous studies undertaken in the Nurses' Health Study cohort and others that used the FFQ demonstrated associations between meat and protein and breast cancer, colorectal cancer, lymphoma, coronary heart disease, diabetes, and gout.[29-35]
Finally, it is possible that dietary protein intake differs between the USA and the UK. However, comparisons of the median intake of total protein and total iron in the quintiles used in the present study (Table 2) with the tertiles of intake in the EPIC study[15] demonstrate that the range and categories of intake in the two studies were similar.
A second difference between our study and the EPIC study was that we identified individuals with RA rigorously using the ACR criteria, in which at least four out of seven criteria had to be satisfied in order for a participant to be considered a case. In contrast, the outcome considered by Pattison and colleagues[15] was the presence of IP, which is defined as inflammation affecting two or more peripheral joints and persisting for 4 weeks or longer. Within 5 years, 60% of IP patients satisfy ACR criteria for RA.[36]
Third, discrepancies between our study and the EPIC study might be related to differences in sex, because our study included women only whereas the EPIC study[15] included men and women. It is also possible that the discrepant findings resulted from socioeconomic differences; well educated nurses were enrolled in our study, whereas the EPIC cohort included diverse population-based cases and controls.
Strengths of our study include the large number of incident cases of RA, the repeated prospective assessment of exposures, and the lengthy follow-up period. The validation of self-reported RA through medical record review rather than by physical examination is a potential weakness of the study. However, 82% of the RA cases were diagnosed by ACR members, which adds support to the validity of the diagnoses. There is potential for misclassification of RA cases as noncases when diagnosis relies solely upon medical record documentation. Therefore, women who self-reported RA or other connective tissue diseases but in whom the diagnosis of RA was not confirmed by medical record review were excluded from the analyses. It is possible that the null results from this study are due to unmeasured confounding (for example, by socioeconomic status), although there are no strong risk factors for RA that could account for attenuation of a true association. Finally, although the participants in the present study do not represent a random sample of women living in the USA, it is unlikely that the biologic relationships among these women differ from those among women in general.
Conclusion
No clear associations were observed between dietary protein, iron, or meat, including red meat, and risk for RA in this large prospective cohort of women.
Open Access
This research article is open access, which means it is universally and freely accessible via the Arthritis Research & Therapy website, deposited in at least one widely and internationally recognized open access repository (such as PubMed Central), and the copyright rests with the authors.
To access research articles on related topics, visit http://arthritis-research.com/researcharticles.
Acknowledgments
We would like to acknowledge all of the nurses who participate in this study and also Gideon Aweh, for programming assistance.
Funding Information
This research was supported by grants CA87979, R01 AR42630, P60 AR47782, and R0149880 from the National Institutes of Health.
Abbreviation Notes
ACR = American College of Rheumatology; CI = confidence interval; FFQ = Food Frequency Questionnaire; IP = inflammatory polyarthritis; OR = odds ratio; RA = rheumatoid arthritis; RR = rate ratio.
Arthritis Research & Therapy. 2007;9(1) © 2007 BioMed Central, Ltd.
Copyright to this article is held by the author(s), licensee BioMed Central Ltd. This is an Open Access article: verbatim copying and redistribution of this article are permitted in all media for any purpose, provided this notice is preserved along with the article's original citation.
From Kidney International
Recent Advances in the Pathophysiology of Nephrolithiasis
Khashayar Sakhaee
Posted: 06/29/2009
Abstract and Introduction
Abstract
Over the past 10 years, major progress has been made in understanding the pathogenesis of uric acid and calcium stones. These advances have furthered our understanding of the pathogenetic link between uric acid nephrolithiasis and the metabolic syndrome, the role of Oxalobacter formigenes in calcium oxalate stone formation, oxalate transport in Slc26a6-null mice, the potential pathogenetic role of Randall's plaque as a precursor for calcium oxalate nephrolithiasis, and the role of renal tubular crystal retention. With these advances, we may target the development of novel drugs including (1) insulin sensitizers; (2) probiotic therapy with O. formigenes, recombinant enzymes, or engineered bacteria; (3) treatments that upregulate intestinal luminal oxalate secretion by increasing anion transporter activity (Slc26a6), luminally active nonabsorbed agents, or oxalate binders; and (4) drugs that prevent the formation of Randall's plaque and/or renal tubular crystal adhesion.
Introduction
Calcium oxalate is the most prevalent type of kidney stone disease in the United States and has been shown to occur in 70-80% of the kidney stone population.[1] The prevalence of recurrent calcium oxalate stones has progressively increased in untreated subjects, approaching a 50% recurrence rate over 10 years.[2] The lifetime risk for kidney stone disease is currently 6-12% in the general population.[3,4] In the final quarter of the twentieth century, the prevalence of kidney stone disease increased across genders and ethnic groups.[4] Although nephrolithiasis is perceived as an acute illness, there has been growing evidence that it is a systemic disorder that can lead to end-stage renal disease.[5-7] It is also associated with an increased risk of hypertension,[8-12] coronary artery disease,[13,14] the metabolic syndrome (MS),[15-20] and diabetes mellitus.[19-24] Nephrolithiasis without medical treatment is a recurrent illness, with a recurrence rate of 50% over 10 years.[2] Nephrolithiasis has remained a prominent issue that imposes a significant burden on human health and a considerable financial expenditure for the nation; in 2005, based on inpatient and outpatient claims, this condition was estimated to cost over $2.1 billion.[25] The development of novel drugs has been hampered largely by the complexity of this disease's pathogenetic mechanism and its molecular genetic basis. Further understanding of these underlying pathophysiologic mechanisms will be the key step in developing more effective preventive and therapeutic measures.
Etiologic Mechanisms of Uric Acid Stone Formation
Three major factors for the development of uric acid (UA) stones are low urine volume, acidic urine pH, and hyperuricosuria. However, abnormally acidic urine is the principal determinant of UA crystallization. The etiologic mechanisms for UA stone formation are diverse and include congenital, acquired, and idiopathic causes.[26] The most prevalent cause of UA nephrolithiasis is idiopathic; in its initial description, the term 'gouty diathesis' was coined.[27] The clinical and biochemical presentation of idiopathic UA nephrolithiasis (IUAN) cannot be attributed to an inborn error of metabolism[26,28,29] or to secondary causes such as chronic diarrhea,[30] strenuous physical exercise,[31] and a high-purine diet.[32]
Physicochemical Characteristics of Uric Acid
In humans and higher primates, UA is an end product of purine metabolism. Owing to the lack of the hepatic enzyme uricase, which converts UA to the more soluble allantoin, serum and urinary levels of UA in these species are considerably higher than in other mammals.[33] Normally, urinary UA solubility is limited to 96 mg/l, so a typical urinary UA excretion of 600 mg/day generally exceeds the limit of solubility and confers susceptibility to precipitation.[34] Urine pH is another important factor in UA solubility. UA is a weak organic acid with an ionization constant (pKa) of 5.5.[35,36] Therefore, at a urine pH of less than 5.5, the urinary environment becomes supersaturated with sparingly soluble, undissociated UA, which precipitates to form UA stones[21,37,38] (Figure 1).
Figure 1. Physicochemical scheme for the development of uric acid stones.
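The pH dependence sketched in Figure 1 follows directly from the Henderson-Hasselbalch relation. As a worked illustration (using only the pKa of 5.5 quoted above), the fraction of urinary UA in the sparingly soluble, undissociated form is

\[
\frac{[\mathrm{HUA}]}{[\mathrm{HUA}] + [\mathrm{UA^-}]} = \frac{1}{1 + 10^{\mathrm{pH} - \mathrm{p}K_a}},
\]

so at a urine pH of 5.0 roughly \(1/(1 + 10^{-0.5}) \approx 76\%\) of urate is undissociated, whereas at pH 6.5 only about \(1/(1 + 10^{1.0}) \approx 9\%\) is. This is why a urine pH persistently below the pKa so strongly favors UA precipitation.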
Epidemiology of Uric Acid Nephrolithiasis and the Metabolic Syndrome
The MS is an aggregate of features that increase the risk of type 2 diabetes mellitus (T2DM) and atherosclerotic cardiovascular disease.[15-17] A retrospective analysis of a stone registry in Dallas initially showed a high prevalence of features of the MS in IUAN patients, leading to the conclusion that patients with IUAN share characteristics of the MS. Numerous epidemiologic studies have shown that obesity, weight gain, and T2DM are associated with an increased risk of nephrolithiasis.[39,40] Despite their large sample sizes, these studies did not report stone composition. This center first reported a high prevalence of UA stones as the main stone constituent found in T2DM. In addition, recent retrospective and cross-sectional studies have noted an increased prevalence of UA stones among obese and T2DM patients.[23,41-44] T2DM and a greater body mass index have also been shown to be independent risk factors for nephrolithiasis.[44]
Pathophysiology of Low Urine pH in Idiopathic Uric Acid Nephrolithiasis
The metabolic defect suspected to underlie the low urinary pH of UA stone formers was described almost four decades ago.[45] Defective ammonia generation or excretion was proposed as a possible pathogenetic mechanism. Initial studies showing abnormalities in glutamine metabolism, which resulted in the impaired conversion of glutamine to α-ketoglutarate and consequently in reduced renal ammonium (NH4+) excretion, were not supported by further investigation.[46-49] Mechanistic studies, however, have shown that the abnormally low urine pH results from a combination of two major factors: defective NH4+ excretion and increased net acid excretion (NAE).
Defective Ammonium Excretion
Under normal circumstances, a tight acid-base balance is maintained by a high-capacity buffer, ammonia (pKa 9.2), which effectively buffers most protons, while the remaining protons are buffered by titratable acids. This process sustains a normal urinary pH. In contrast, the defective NH4+ excretion in IUAN requires the urine to be buffered mainly by titratable acids to maintain this equilibrium, thus promoting an acidic urinary pH and providing an environment highly conducive to UA precipitation (Figure 2).
Figure 2. Mechanisms of urinary acidification.
Increased acid production alone may not be sufficient to cause abnormally acidic urine, because the excreted acid is neutralized by urinary buffers. Evidence of defective NH4+ excretion was provided in IUAN patients under a fixed metabolic diet.[21,23] An unduly acidic urine pH in the IUAN population is therefore not related to environmental factors; it is, in part, related to the higher body weight of these subjects.[50] The defective NH4+ production in these subjects was further explored by the administration of an acute acid load, which amplified the ammoniagenic defect[21] (Figure 3). Similar findings were also demonstrated in IUAN patients on a random diet.[22] Furthermore, it has been shown that in normal persons, urinary pH and the NH4+/NAE ratio fall with an increasing number of features of the MS, indicating that reduced renal ammoniagenesis and low urine pH may be features of the MS in general and not specific to IUAN.[51]
Figure 3. Acute acid loading. Previously published in Sakhaee et al.[21]
Several studies have provided evidence supporting a relationship between UA nephrolithiasis, obesity, and insulin resistance.[21,23,41-44] The mechanistic connection between peripheral insulin resistance, urinary pH, and NH4+ was first demonstrated using the hyperinsulinemic-euglycemic clamp technique in patients with IUAN.[24] These studies support a potential role for insulin resistance in impaired urinary NH4+ excretion and low urinary pH. Insulin receptors are expressed in various portions of the nephron.[52,53] Furthermore, in vitro studies have shown that insulin has a stimulatory function in renal ammoniagenesis.[54,55] In addition, NH4+ secretion is regulated by the sodium-hydrogen exchanger NHE3.[56] As NHE3 has a key function in the transport or trapping of NH4+ in the renal tubular lumen,[56] insulin resistance may potentially lead to defective renal NH4+ excretion. One other plausible mechanism may be substrate competition, with circulating free fatty acid, which is increased in the MS, substituting for glutamine and thereby reducing proximal renal tubular cell utilization of glutamine and renal ammoniagenesis.[57]
Increased Net Acid Excretion
An elevated NAE may occur due to increased endogenous acid production or because of dietary influences such as low dietary alkali or the increased consumption of acid-rich foods.[36] Metabolic studies comparing subjects on fixed, low-acid ash diets showed a higher NAE in IUAN patients compared to control subjects, suggesting that endogenous acid production may increase in IUAN [34] (Figure 4). In addition, the urinary NAE for any given urinary sulfate (a surrogate marker of acid intake) tended to be higher in patients with T2DM.[22] These studies also implied that the pathophysiologic mechanism accounting for increased NAE is related to obesity/insulin resistance. Supporting this correlation, additional studies have shown increased organic acid excretion with higher body weight and higher body surface area.[58,59] The nature of these putative organic anions and their link to obesity and/or UA stones has not been fully studied.
Figure 4. Inpatient net acid excretion. Net acid excretion = NH4+ + titratable acid (TA) − (HCO3− + citrate).
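To make the balance in Figure 4 concrete, consider a worked example with purely illustrative numbers (not values from the cited studies): with urinary NH4+ of 30 mEq/day, titratable acid of 25 mEq/day, bicarbonate of 5 mEq/day, and citrate of 10 mEq/day,

\[
\mathrm{NAE} = 30 + 25 - (5 + 10) = 40\ \mathrm{mEq/day}.
\]

If ammoniagenesis is defective, the same NAE must instead be carried largely by titratable acids, which can only occur at a lower urine pH.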
Potential Role of Renal Lipotoxicity
Under standard metabolic conditions, when caloric intake and caloric utilization are well balanced, triglycerides accumulate in adipocytes.[60,61] A disequilibrium in this tight balance leads to the accumulation of fat in non-adipocyte tissues.[61] This process of fat redistribution, termed lipotoxicity, typically affects tissues such as cardiac myocardial cells, pancreatic β-cells, skeletal muscle cells, and parenchymal liver cells.[61-66]
Cellular injury is primarily due to the accumulation of nonesterified fatty acids and their toxic metabolites, including fatty acyl CoA, diacylglycerol, and ceramide.[60,67,68] It has been shown that fat redistribution is accompanied by impaired insulin sensitivity,[63] cardiac dysfunction,[65] and steatohepatitis.[62,69] There is an emerging interest in the role of renal lipotoxicity in the pathogenesis of renal disease.[67,70,71] A few studies have revealed a mechanistic link between obesity, obesity-initiated MS, and chronic kidney disease.[70,71] Additional studies have shown a possible role of sterol-regulatory element-binding proteins in renal fat accumulation and injury.[72-74] At present, there are insufficient data to establish whether renal lipotoxicity increases endogenous acid production and reduces renal ammoniagenesis, consequently leading to abnormally acidic urine.
From the above information, one may propose a three-hit mechanism for the development of low urinary pH and the propensity for UA stone formation. The first hit is excessive dietary acid intake and/or increased endogenous acid production; this alone, however, may not be sufficient to lower urinary pH. The second hit is defective NH4+ excretion. Together, these two defects lower urinary pH sufficiently to convert urate salts into undissociated UA, which is necessary but not sufficient for the formation of UA stones. Finally, the absence of inhibitors or the presence of promoters of UA precipitation triggers UA stone formation.
Calcium Oxalate Nephrolithiasis
Although it affects both genders, calcium oxalate nephrolithiasis tends to occur more often in men than in women. In calcium oxalate stone formers, urinary oxalate and urinary calcium are equally conducive to raising urinary calcium oxalate supersaturation.[75] Hyperoxaluria is encountered in 8-50% of kidney stone formers.[76-78] The main etiologic causes of hyperoxaluria can be classified into three groups: (1) increased oxalate production as a result of an inborn error of metabolism in the oxalate synthetic pathway, (2) increased substrate provision from dietary oxalate-rich foods or other oxalate precursors, and (3) increased intestinal oxalate absorption.[1] With the study of Oxalobacter formigenes (OF)[79,80] and of the putative anion transporter Slc26a6[81] as potential tools in the treatment of primary hyperoxaluria, our knowledge of the pathophysiologic mechanisms of oxalate metabolism has advanced significantly over the past decade.[82]
Physicochemical Properties of Oxalate
The human serum oxalate concentration ranges between 1 and 5 µM; however, owing to water reabsorption in the kidney, its concentration is roughly 100 times higher in the urine.[1,83] At physiologic pH, oxalate forms an insoluble salt with calcium. As the solubility of calcium oxalate in aqueous solution is limited to approximately 5 mg/l at a pH of 7.0, and given that normal urine volume ranges between 1 and 2 l/day and normal urinary oxalate excretion is less than 40 mg/day, normal urine is often supersaturated with calcium oxalate. Under normal conditions, however, the blood is undersaturated with respect to calcium oxalate. As seen in patients with primary hyperoxaluria and renal insufficiency, when the serum oxalate concentration rises above 30 µM, the blood becomes supersaturated with calcium oxalate.[84] In plasma, oxalate is not significantly protein bound and is freely filtered by the kidneys. A recent study reported that urinary calcium is as important as urinary oxalate in raising calcium oxalate supersaturation.[75]
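A back-of-the-envelope check using the representative values just quoted (illustrative arithmetic, not data from the article) shows why:

\[
\frac{40\ \mathrm{mg/day}}{1.5\ \mathrm{l/day}} \approx 27\ \mathrm{mg/l} \gg 5\ \mathrm{mg/l},
\]

that is, ordinary urine can carry calcium oxalate at roughly five times its solubility limit. The same numbers are consistent with the 100-fold urine-to-serum gradient: 40 mg of oxalate (molecular weight ≈ 88 g/mol) is about 0.45 mmol, which in 1.5 l of urine gives roughly 300 µM, about 100 times a mid-range serum value of 3 µM.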
Oxalate Homeostasis
Hepatic Production
In mammals, oxalate is an end product of hepatic metabolism.[79] The major precursor for hepatic oxalate production is glyoxylate, which is metabolized within hepatic peroxisomes. This metabolic conversion is mediated by the enzyme alanine:glyoxylate aminotransferase. Under normal circumstances, the metabolism of glyoxylate to glycolate and glycine determines the conversion of glyoxylate to oxalate. Glyoxylate is also metabolized to glycolate by the enzyme D-glycerate dehydrogenase, which has both glyoxylate reductase and hydroxypyruvate reductase activities.[85] Inborn errors of metabolism with alanine:glyoxylate aminotransferase or glyoxylate/hydroxypyruvate reductase deficiency lead to oxalate overproduction, resulting in type 1 and type 2 primary hyperoxaluria, respectively.[79,81,85] Several other metabolic precursors of oxalate, including the breakdown products of ascorbic acid, fructose, xylose, and hydroxyproline, have also been incriminated. However, their influence on oxalate production under normal physiologic circumstances has not been fully established.[86-88]
Intestinal Absorption
Dietary oxalate intake is an important contributor to urinary oxalate excretion. Estimated intake of oxalate ranges between 50 and 1000 mg/day.[77,78,89] Oxalate-rich foods include seeds such as the cacao bean, from which chocolate is derived, and leafy vegetation, including spinach, rhubarb, and tea. The contribution of dietary oxalate to urinary oxalate excretion, approaching approximately 45%, has been shown to be much higher than previously described.[90] In addition, intestinal oxalate absorption ranges between 10 and 72%, and the relationship between oxalate absorption and dietary oxalate intake has not been shown to be linear.[90]
In humans, the exact intestinal segment participating in oxalate absorption has not been determined. Indirect evidence suggests that oxalate absorption occurs throughout a large segment of the small intestine, since the bulk of absorption occurs during the first 4-8 h after the ingestion of oxalate-rich foods[91-93]; this inference rests on the reported 5-h intestinal transit time from the stomach to the colon. It has also been suggested that the colon participates in oxalate absorption, but to a lesser extent.[93] In addition, paracellular intestinal oxalate flux has been suggested to occur in the early segment of the small intestine, largely because of the negative intestinal luminal potential and the higher luminal oxalate concentration relative to the blood.[94]
Role of the Putative Anion Exchange Transporter Slc26a6
Recently, the putative anion exchange transporter Slc26a6 has been shown to be involved in intestinal oxalate transport.[82] Slc26a6 is expressed in the apical portion of various segments of the small intestine, such as the duodenum, jejunum, and ileum. It is also found in the large intestine, but to a lesser extent.[95] In vitro studies using the Ussing chamber technique demonstrated defective net oxalate secretion in mice with targeted inactivation of Slc26a6.[96] Moreover, in vivo studies of Slc26a6-null mice on a controlled oxalate diet reported high urinary oxalate excretion, increased plasma oxalate concentration, and decreased fecal oxalate excretion.[96] These differences in urinary, plasma, and fecal oxalate were abolished following a 7-day equilibration on an oxalate-free diet. The findings suggest that the loss of net oxalate secretion in Slc26a6-null mice increases net oxalate absorption, raising plasma oxalate concentrations and consequently urinary oxalate excretion. The study concluded that the Slc26a6 anion exchanger has a key function in urinary oxalate excretion.[96] These changes were also associated with bladder stones and Yasue-positive crystals in the kidney. Yasue staining of kidney specimens demonstrated birefringent crystal deposits in the lumina of cortical collecting ducts and, to a minimal extent, in the inner medullary collecting ducts (IMCD). Calcium oxalate stones were found in the renal pelvis and bladder. The renal tubular epithelial cells showed distorted morphology and were surrounded by lymphocytic infiltration. However, in contrast to the kidneys of idiopathic calcium oxalate stone formers, no abnormality was found in the medullary interstitial space.
Role of O. formigenes
Among many other bacteria, including Eubacterium lentum, Enterococcus faecalis, Lactobacillus, Streptococcus thermophilus, and Bifidobacterium infantis, OF has been reported to degrade oxalate.[94] OF was first isolated in ruminants[97] and has since been found in many animal species as well as in humans.[98] However, OF is not found in infancy; the bowel becomes colonized with this bacterium at approximately 6-8 years of age. Colonization decreases in later years, and OF may be found in the feces of only 60-80% of the adult population.[99]
OF is a Gram-negative obligate anaerobe that primarily utilizes oxalate as a source of energy for cellular biosynthesis.[100] In this electrogenic process, oxalate enters the bacterium through an oxalate-formate antiporter. The organism then utilizes its own enzymes, formyl-CoA transferase and oxalyl-CoA decarboxylase, to convert oxalate into formate and CO2.[101] In this process one proton is consumed, creating a chemical gradient owing to intracellular alkalinity. The electrochemical gradients created by these processes facilitate proton entry and ATP synthesis[101] (Figure 5).
Figure 5. Oxalate catabolism and energy conservation in Oxalobacter formigenes.
The clinical importance of OF colonization has been suggested primarily for patients with recurrent calcium oxalate nephrolithiasis,[102-104] for patients with enteric hyperoxaluria,[105,106] and for those with cystic fibrosis.[107] Studies in patients with urolithiasis and cystic fibrosis have shown that prolonged use of antibiotics may abrogate bowel colonization with OF and may irreversibly destroy these bacteria. Very recently, a case-control study of 274 patients with recurrent calcium oxalate stones and 259 normal subjects matched for age and gender showed that the prevalence of OF was significantly lower in the stone formers: 17% of stone formers were positive for OF versus 38% of normal subjects. This relationship persisted irrespective of age, gender, race, ethnic background, region, and antibiotic use[108] (Figure 6).
Figure 6. Oxalobacter formigenes in stool among patients with recurrent calcium oxalate kidney stones and non-stone formers. Modified from Kaufman et al.[108]
The colonization of OF may be regulated by dietary oxalate intake; in animal models, a significant decrease in urinary oxalate resulted from the administration of OF or from upregulation of OF colonization.[102,109] It has recently been shown ex vivo in rodents, using the Ussing chamber method, that the role of OF in oxalate metabolism depends not solely on its capacity to degrade intestinal luminal oxalate and thereby lower mucosal-to-serosal oxalate flux, but also on its capacity to stimulate net intestinal oxalate secretion.[110] Given this experimental design, the increased net oxalate secretion cannot be explained by transepithelial oxalate gradients; one may speculate that OF interacts with mucosal epithelial cells to enhance luminal oxalate secretion.
The results of these animal experiments have recently been translated to human disease.[80] One study, conducted in patients with type 1 primary hyperoxaluria, in subjects with normal renal function, and in patients with chronic renal insufficiency, reported a reduction in urinary oxalate following the oral administration of OF.[80] The major drawbacks of the use of OF are (1) the lack of large, long-term, controlled studies in calcium oxalate kidney stone formers and in subjects with enteric hyperoxaluria, such as patients with cystic fibrosis or those who have undergone a gastrointestinal bypass procedure; (2) the variable response to OF administration; and (3) OF's short life span once its primary nutrient source, oxalate, has been completely utilized. Overcoming these deficiencies will require long-term studies and the development of targeted drugs that upregulate intestinal oxalate secretion by stimulating Slc26a6, provide the enzyme products of OF to sustain its oxalate-degrading capacity, provide engineered bacteria that are not entirely dependent on oxalate as a nutrient substrate, or contain a luminally active agent that binds intestinal luminal oxalate.
Renal Excretion
The kidney has an important function in oxalate excretion. With impaired kidney function, plasma oxalate concentrations progressively rise and cause further kidney damage. Eventually, with further impairment, the plasma oxalate concentration exceeds its saturation point in the blood, increasing the risk of systemic tissue oxalate deposition. It has recently been demonstrated that Slc26a6 is also expressed in the apical portion of the proximal renal tubule[111] and influences the activity of various apical anion exchangers.[112] In Slc26a6-null mice, Cl−/oxalate exchange activity is completely inhibited, and Cl−/OH− and Cl−/HCO3− exchange activity is significantly diminished. However, the significance of this putative anion transporter in calcium oxalate stone formation has not been fully elucidated.
Randall's Plaque in the Pathogenesis of Calcium Oxalate Stones
Several mechanisms have been proposed for the formation of calcium stones. First, it has been suggested that increased supersaturation of stone-forming salts is responsible for homogeneous nucleation in the lumen of the nephron; this process, followed by crystal growth, ultimately results in obstruction of the distal nephron. Second, it has been suggested that crystals formed in the renal tubular lumen adhere to luminal renal tubular cells. This adhesion induces renal cell injury, resulting in the formation of a fixed nidus that interacts with the supersaturated urinary environment and supports crystal growth. Both processes lead to nephron obstruction and consequently to intratubular calcification.[113] However, fixed and free crystal growth and attachment in the nephron have not been fully established as mechanisms of kidney stone formation. As occurs in patients with intestinal bypass and cystine stones, if an intraluminal crystal plug forms at the opening of the duct of Bellini, it is possible for this mineral plug to protrude into the minor calyx, resulting in stone growth.
Dr Alexander Randall was the first to argue that intraluminal plugging is an infrequent occurrence in kidney stone formers.[114] Instead, he suggested that interstitial calcium phosphate deposits are the initial niduses that anchor urinary crystals beneath the normal uroepithelium of the renal papilla. Erosion of the overlying uroepithelium exposes these deposits, referred to as plaques, to the supersaturated urine, which then propagates calcium oxalate stones. He found these lesions to be interstitial rather than intraluminal, and without any inflammatory reaction. He also showed the deposits to be located mainly beneath the tubular basement membrane and in the interstitial collagen. Randall's hypothesis was disputed primarily because his work was carried out in cadaveric kidney specimens rather than in a targeted kidney stone-forming population.[114] His major discovery, however, was a small stone propagating in the renal pelvis that was attached to a calcium plaque in the papilla of the kidney.
Characteristics of the Interstitial Plaques
Randall's initial observations have recently been extended with the development of modern techniques for determining mineral composition. These techniques have been used to characterize the nature of the crystals attached to these plaques and to develop novel methods of visualizing Randall's plaque in vivo in patients with nephrolithiasis.[115,116] An analysis of over 5000 stones showed the main mineral composition of interstitial plaque to be carbapatite, although amorphous carbonated calcium phosphate, sodium hydrogen urate, and UA were found to a smaller extent.[117] Another study, utilizing µ-CT, determined that apatite crystal surrounded by calcium oxalate was the main mineral composition of Randall's plaque.[118]
It was first shown that Randall's plaques occur more frequently in patients with kidney stones than in non-stone formers undergoing endoscopic evaluation.[119] Furthermore, a relationship was found between metabolic abnormalities in patients with calcium stones and the number of plaques.[120] This result was obtained using digital video and endoscopic techniques to accurately estimate the extent of Randall's plaque in both calcium stone-forming and non-stone-forming subjects.[121] In this study, the main biochemical parameters correlating with the formation of interstitial plaque were urinary volume, urinary pH, and urinary calcium excretion; higher urinary calcium and lower urinary volume were associated with greater plaque coverage of the renal papilla. This study supports a mechanistic relationship between water reabsorption in the renal medulla and papilla and plaque formation. In addition, a separate retrospective study, using nephroscopic papillary mapping with representative still images and Moving Pictures Expert Group movies in 13 calcium oxalate kidney stone formers, found the percentage of plaque coverage to be directly correlated with the number of kidney stones formed.[122]
Localization of Randall's Plaque
The basement membrane of the thin descending limb of the loop of Henle is the principal site of Randall's plaque localization.[115] The thin descending limb basement membrane is made up of collagen and mucopolysaccharides, which attract calcium and phosphate ions.[123] Once these ions are attracted to this protein matrix, the crystallization process begins. In the ensuing interaction, calcium phosphate crystals grow and propagate into the surrounding collagen- and mucopolysaccharide-rich renal interstitium.[124] This complex then makes its way through the urothelium and serves as a nidus for calcium oxalate deposition, ultimately resulting in calcium oxalate kidney stone formation. Randall's plaque has been localized only in basement membranes and in the interstitium; it has never been found in the tubular lumen, within epithelial cells, or in vessels. Within the basement membrane, the plaque consists of coated particles with overlying regions of crystalline material and organic matrix[116] (Figure 7).
Figure 7. Sites and characteristics of crystal deposition. A transmission electron micrograph showing a crystalline structure composed of concentric layers of crystalline material (light) and matrix protein (dark). Previously published in Evan et al.[127]
Mechanism of Plaque Formation
The mechanism of interstitial plaque formation has not been fully elucidated; progress is limited by the lack of an animal model that mimics the human disease. A few clinical studies have suggested a correlation between urine volume, urinary calcium, and severity of stone disease and the fraction of papillary interstitium covered by Randall's plaque.[119-122] Although this link is not causal, it indicates some correlation between plaque formation and kidney stone disease in idiopathic hypercalciuric patients. It is plausible to propose that plaque formation in the thin descending limb of Henle occurs because of an increase in interstitial calcium and phosphate concentration, as well as an increase in renal papillary osmolality, as a result of water reabsorption in this nephron segment.[125] Whether increasing interstitial fluid pH affects the abundance of plaque formation has been suggested but never fully explored.[116]
Absence of Randall's Plaque
Following Gastric Bypass Surgery
Hyperoxaluria and calcium oxalate stones are common in patients who have undergone intestinal bypass surgery for morbid obesity.[116,126] In these subjects, no plaque is observed in the renal papilla; instead, crystal aggregates are found in the IMCD. Moreover, in contrast to idiopathic calcium oxalate stone formers, there is evidence of renal IMCD cell injury, interstitial fibrosis, and inflammation adjacent to the crystal aggregates. The IMCD crystal aggregates are usually composed of apatite crystals, and their deposition occurs despite an acidic urinary environment, implying that tubular pH may differ from the final urinary pH.[116,126]
Brushite Stone Formers
In brushite stone formers, as in calcium oxalate stone formers following gastric bypass surgery, there is evidence of cell injury and interstitial fibrosis in the IMCD adjacent to apatite crystal deposits. Although brushite stone formers, much like idiopathic calcium oxalate stone formers, have plaque in the renal papilla, their stones have not been shown to attach to the plaque.[127] This may be due in part to clinical and technical difficulties: the high burden of brushite stones may affect the structural integrity of the renal papillae, making it difficult to detect smaller stones that may be attached to plaque. In addition, the extent of Randall's plaque is minimal in brushite stone formers, so attached stones are not commonly expected. One important predisposition to distortion of papillary structural integrity in these subjects may be acquired and related to the number of shockwave lithotripsy treatments in this population.[127,128]
The Role of Renal Tubular Crystal Retention
Although the crystallization process is necessary, it alone is not sufficient for the formation of kidney stones. Three decades ago, it was proposed that the accumulation of crystals in the renal calices is involved in the pathogenesis of nephrolithiasis.[129] It was further hypothesized that tubular nephrocalcinosis precedes renal stone formation. This scheme does not refute Randall's theory that interstitial nephrocalcinosis and plaque formation are precursors for the development of kidney stones;[114] rather, it has become increasingly recognized that both mechanisms may be significant in the formation of kidney stones.[130,131] Further elaboration of these two pathogenic pathways is important, as stone formation may occur in the absence of plaque in the kidney.[132] Furthermore, experimental evidence has suggested that crystal binding to the surface of regenerating/redifferentiating renal tubular cells is regulated by the expression of a number of luminal membrane molecules, including hyaluronic acid, osteopontin, their transmembrane receptor protein CD44, and p38 mitogen-activated protein kinase.[130,133-137] In addition, several other molecules expressed in the renal tubular apical membrane, such as annexin-II[138] and an acidic fragment of a nucleolin-related protein,[139] have been proposed as active binding regions for calcium oxalate crystals. The clinical implications of this experimental evidence are progressively emerging in the field.
In addition, the increased incidence of tubular nephrocalcinosis in preterm infants may occur from the exposure of differentiating renal tubular epithelial cells to crystalluria caused by furosemide treatment.[140,141] Moreover, tubular nephrocalcinosis has been seen in a large number of renal allografts, suggesting that ischemic injury, with a resulting increase in the expression of hyaluronic acid and osteopontin, precedes crystal retention.[142,143] From the above discussion, one can conclude that under normal conditions crystals do not adhere to renal tubular epithelial cells and are readily excreted in the urine. However, with antecedent renal tubular epithelial damage and during the process of renal tubular repair,[144-146] specific crystal-binding proteins are expressed at the apical surface of the renal epithelial cell, predisposing to crystal adhesion and possibly stone formation. Whether this process has a pathogenic function in the many clinical conditions associated with tubular nephrocalcinosis and nephrolithiasis deserves intense future investigation.[130]
Conclusion
Kidney stone disease remains a major public health burden. Its pathophysiologic mechanisms are complex, mainly because it is a polygenic disorder involving an intricate interaction between the gut, kidney, and bone. In addition, an animal model that exactly recapitulates the human disease has not yet been defined. Despite these limitations, our comprehension of the link between UA stone formation, insulin resistance, and renal lipotoxicity; of the underlying mechanisms of intestinal oxalate transport; and of the roles of renal papillary plaque and renal tubular crystal binding in idiopathic calcium oxalate stone formation has advanced significantly over the past decade. These insights can potentially lead to the development of novel drugs that target basic metabolic abnormalities and abrogate stone formation.
Balancing Diuretic Therapy in Heart Failure: Loop Diuretics, Thiazides, and Aldosterone Antagonists
Sara Paul, RN, MSN, FNP
Posted: 01/14/2003; © 2002 Le Jacq Communications, Inc.
Introduction
In heart failure, sodium is retained by the kidneys despite increases in extracellular volume. There is activation of renin secretion, which culminates in the production of angiotensin II, causing vasoconstriction and aldosterone secretion. These synergistically produce an increase in tubular reabsorption of sodium and water. Diuretics are the mainstay of symptomatic treatment to remove excess extracellular fluid in heart failure. Diuretics that affect the ascending loop of Henle are most commonly used. Thiazide diuretics promote a much greater natriuretic effect when combined with a loop diuretic in patients with refractory edema. Recently, spironolactone, an aldosterone receptor blocking agent, has been recommended to attenuate some of the neurohormonal effects of heart failure. Regardless of the diuretic, patients need to be counseled on the importance of avoiding sodium in their diet.
Sodium Retention and Edema in Heart Failure
Some fundamental features of extracellular volume overload in heart failure have been known and well documented in medical literature for decades. At the turn of the century, Starling[1] noted that blood volume was more than likely to be increased in patients with edema. Over 50 years ago, Starr et al.[2,3] showed that edema occurs only when venous pressure is elevated, and Warren and Stead[4] made the observation that an increase in weight precedes an increase in venous pressure. In 1946, Merrill[5,6] noted that weight gain in patients with congestive heart failure (CHF) was the result of salt and water retention by the kidney due to low renal blood flow.
The physiology behind these observations remains the same today. In normal subjects, intravascular volume (plasma volume) and interstitial space, which together constitute the extracellular volume (ECV), remain constant, despite altered sodium and water intake. Since sodium constitutes more than 90% of the total cations of the extracellular fluid, the body content of sodium is the primary determinant of ECV. Control of ECV is dependent upon sodium balance, which is controlled by the kidneys. If ECV is increased in a normal person, the kidneys excrete extra salt and water. In CHF, however, sodium is retained by the kidneys despite increases in ECV.
Sodium and water retention is not necessarily due to decreased cardiac output, since there are high output states that also cause edema, such as severe anemia, thyrotoxicosis, chronic arteriovenous fistula, Paget's disease, and beriberi.[7] Furthermore, sodium retention is not caused by decreased blood volume, since blood volume is increased with CHF, not decreased. It is clear, however, that salt and water retention in CHF is at least in part due to the body's attempt to maintain a normal arterial blood pressure.
Data from a group of patients with untreated severe left ventricular dysfunction gave insight into the physiology of edema in CHF. As would be expected, these patients had resting tachycardia and increased right- and left-sided filling pressures.[8] Despite a 50% reduction in cardiac output, arterial blood pressure was normal due to increased systemic vascular resistance. Total body water was increased 16% above normal, almost all of which was in the extracellular space. Plasma volume increased by 34% and total body exchangeable sodium increased 37%. Effective renal plasma flow was severely decreased to 30% of normal due to severe renal vasoconstriction. Glomerular filtration rate was reduced to a lesser extent, suggesting greater efferent than afferent arteriolar vasoconstriction. Plasma norepinephrine was increased more than six times above normal and plasma renin activity was nine times normal. Aldosterone was increased six times above normal and plasma atrial natriuretic peptide was increased to 15 times normal. It appears, therefore, that the sodium-retaining effects of the catecholamines and the renin-angiotensin system prevail over the natriuretic effects of atrial natriuretic peptide in advanced CHF. It is noteworthy that the marked increase in plasma volume did not mitigate the ongoing activation of neurohormonal sodium-retaining mechanisms.
Diminished renal blood flow is thought to be the stimulus for activation of renin secretion in heart failure, which culminates in the production of angiotensin II, causing vasoconstriction and aldosterone secretion. Angiotensin II and aldosterone synergistically produce an increase in tubular reabsorption of sodium and water. Angiotensin II and aldosterone exert direct myocardial effects, leading to ventricular hypertrophy and cardiac fibrosis.
Diuretic Therapy in CHF
Loop Diuretics
Over time, the retention of sodium leads to crackles, peripheral edema, hepatomegaly with ascites, increased blood volume, and increased cardiac filling pressures. Although diuretics do not directly treat the pathologic changes that occur with heart failure, they are the mainstay of symptomatic treatment to remove excess extracellular fluid, thus alleviating pulmonary and peripheral edema. Diuretics that exert their primary action on the thick ascending loop of Henle are most commonly used. Most of the filtered sodium is reabsorbed in the proximal tubule (60%-65%) and the loop of Henle (20%). At maximum dose, loop diuretics can lead to excretion of up to 20%-25% of filtered sodium.[9,10] The main loop diuretics used in the United States are furosemide, bumetanide, and torsemide. Thiazide diuretics, such as metolazone, are less potent than loop diuretics and are therefore less useful when used alone in CHF patients.
A short-acting diuretic such as furosemide produces significant natriuresis during the 6-hour period following drug administration. However, sodium excretion falls to very low levels during the remaining 18 hours of the day because the volume depletion from the furosemide leads to activation of sodium-retaining mechanisms, such as the renin-angiotensin-aldosterone system and the sympathetic nervous system. The activated neurohormones angiotensin II, aldosterone, and norepinephrine promote tubular sodium reabsorption,[11-13] thus contributing to rebound sodium retention. Consequently, if a patient consumes a high-sodium diet, there is no net loss of sodium, despite diuretic therapy. Solutions to this problem include eating a low-sodium diet, taking the diuretic twice a day, or increasing the dose of diuretic. Maximum diuresis will occur with the first daily dose of diuretic, but activation of sodium-retaining mechanisms can limit the response to the second dose. Concomitant use of an angiotensin-converting enzyme (ACE) inhibitor decreases activation of the renin-angiotensin system and may thereby increase the diuretic effect of a second daily dose.
It is important to note that diuretic therapy alone is not sufficient to control sodium and fluid retention in patients with CHF. Dietary reduction in sodium is imperative to promote diuresis and prevent accumulation of extracellular fluid. Patients must be educated about the effects of sodium in heart failure and they must learn to calculate their intake of sodium, keeping the total intake below 4000 mg per day. If they have moderate to severe heart failure with pulmonary or peripheral edema, they may need to reduce their sodium intake even further to 2000-3000 mg per day.
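As a concrete illustration of the kind of daily tally such counseling asks for, the short Python sketch below sums food-label values against the thresholds just described. The foods and milligram figures are illustrative placeholders, not a validated nutrition database:

DAILY_LIMIT_MG = 4000    # general ceiling described above
STRICT_LIMIT_MG = 3000   # upper end of the 2000-3000 mg range for moderate-to-severe heart failure

def total_sodium(items_mg):
    """Sum one day's sodium intake (mg) from food-label values."""
    return sum(items_mg.values())

day = {"canned soup": 900, "deli sandwich": 1200,
       "frozen dinner": 800, "bread and snacks": 600}

intake = total_sodium(day)
print(f"total sodium: {intake} mg")   # 3500 mg
if intake > DAILY_LIMIT_MG:
    print("over the 4000 mg/day ceiling")
elif intake > STRICT_LIMIT_MG:
    print("acceptable in general, but above the range advised for moderate-to-severe heart failure")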
The reduction in intracardiac pressures that is induced by diuretics lowers intravascular pressure, thereby permitting mobilization of edema fluid from the interstitium. Edema fluid is mobilized diffusely from tissues and maintains the intravascular volume, thus supporting hemodynamics, even with rapid diuresis. However, once edema has resolved, this defense against intravascular volume depletion is not available. Lowering of the pulmonary capillary wedge pressure to the optimal range (15-18 mm Hg) produces very little, if any, decrease in cardiac index (Figure: Starling curve, point B to point C), but an excessive decrease in preload will lower the cardiac index (Figure: point C to point A). This diuretic-induced reduction in cardiac filling pressures can lead to a decline in cardiac output and activation of the renin-angiotensin system. Again, in this situation, the use of an ACE inhibitor will decrease activation of the renin-angiotensin system but will not, by itself, restore cardiac output if over-diuresis is induced.
Figure. Diuretic effects on cardiac index and pulmonary capillary wedge pressure in the presence of left ventricular dysfunction. Modified with permission of the McGraw-Hill Co. DiPiro JT, Talbert RL, Hayes PE, et al., eds. Pharmacotherapy: A Pathophysiologic Approach. 2nd ed. Norwalk, CT: Appleton & Lange; 1993:169.
In pulmonary edema due to acute myocardial infarction, intravenous furosemide causes transient venodilation resulting in a fall in cardiac filling pressures and decreased pulmonary congestion prior to the onset of diuresis.[14] Loop diuretics increase the production of vasodilator prostaglandins; thus, the venodilator response can be blocked in the presence of nonsteroidal anti-inflammatory drugs (NSAIDs).[15-18] Prostaglandins protect the glomerular microcirculation by promoting vasodilation in the afferent arterioles, thereby promoting sodium excretion.[19,20] Consequently, it is important to counsel diuretic-treated patients to avoid the use of NSAIDs for pain relief.[21,22]
In patients with advanced, chronic CHF and chronic renin hypersecretion, intravenous loop diuretics may cause an acute increase in plasma renin and norepinephrine levels, leading to arteriolar vasoconstriction and a rise in systemic blood pressure. This increase in afterload can transiently decrease cardiac output and increase pulmonary capillary wedge pressure, with possible worsening of dyspnea. These changes are usually reversed within 1 hour once diuresis begins and the release of vasoconstrictors decreases.[23]
Electrolyte imbalance, particularly hypokalemia, is the most common adverse effect of loop diuretics. Through this mechanism, diuretics may increase mortality (especially arrhythmic deaths). In the Studies of Left Ventricular Dysfunction (SOLVD),[24] diuretic use was associated with a higher incidence of overall mortality, cardiovascular deaths, and arrhythmic or sudden deaths as compared with non-use of diuretics at baseline. Hypokalemia was thought to be the mechanism of arrhythmia mortality. Other adverse effects include hyperuricemia, which could precipitate an acute episode of gout. Ototoxicity and glucose intolerance are rare side effects.
The bioavailability of oral furosemide is only about 50%, but there is wide variability among patients.[25] The dose should be governed by diuretic response. Generally, the oral dose of furosemide is twice that of the intravenous dose because of incomplete absorption. Decreased intestinal perfusion and mucosal edema may markedly slow the rate of drug absorption and rate of drug delivery to the kidney.[26-29] This is usually reversed when some edema fluid is removed.[27] Bumetanide and torsemide have better oral bioavailability than furosemide, and therefore there is a more predictable relationship between intravenous and oral doses with these agents.
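Because oral bioavailability averages roughly 50%, the 2:1 oral-to-intravenous ratio mentioned above can be expressed as a simple conversion. The sketch below is a back-of-the-envelope illustration only; given the wide interpatient variability just noted, actual dosing must be titrated to diuretic response:

FUROSEMIDE_ORAL_BIOAVAILABILITY = 0.5   # approximate population average cited above

def oral_equivalent_mg(iv_dose_mg, bioavailability=FUROSEMIDE_ORAL_BIOAVAILABILITY):
    """Estimate the oral furosemide dose expected to match a given IV dose."""
    return iv_dose_mg / bioavailability

print(oral_equivalent_mg(40))   # 80.0 -> 40 mg IV is roughly 80 mg orally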
Patients with advanced heart failure become less responsive to conventional oral doses of loop diuretics due to decreased renal perfusion (decreased tubular secretion of the diuretic and reduced filtered load of sodium) and increases in sodium-retaining hormones (angiotensin II and aldosterone).[25] Resistance to diuretics may occur after chronic use. Patients are considered "diuretic-resistant" if they have progressive edema despite increased oral or intravenous diuretic doses. This occurs in 20%-30% of patients with severe left ventricular dysfunction. Persistent fluid retention can be caused by a number of factors (Table I).[10,21,22] Suggestions to overcome diuretic resistance include giving the diuretic via the intravenous route (bolus or infusion), optimizing the dosage, or using combination therapy with a thiazide diuretic to block sodium reabsorption at multiple sites. Alleviating factors that contribute to fluid retention, such as a high-sodium diet and use of NSAIDs, may promote a diuretic response.
Bolus intravenous administration of furosemide has a short-acting effect similar to that of oral furosemide and is associated with initially high and then low rates of diuretic excretion. A continuous infusion of furosemide may produce greater net sodium excretion than intermittent bolus administration because a constant infusion maintains an optimal rate of drug excretion.[25,30,31] Doses of 20-40 mg per hour of furosemide, 1-2 mg per hour of bumetanide, or 10-20 mg per hour of torsemide may provide better diuresis than individual bolus doses.[25]
Posture can affect the patient's response to a diuretic. Patients with CHF have enhanced renal perfusion when supine, and therefore better diuretic delivery to the kidneys. Hence, supine positioning can increase the diuretic response as much as two-fold.[32] As a last resort, hemofiltration can be utilized in refractory patients who do not respond to diuretic therapy. Excess fluid can be removed by ultrafiltration of the blood through a semipermeable dialysis membrane. Occasionally, ultrafiltration can restore diuretic responsiveness in previously refractory patients.
Thiazides
When a patient requires 240 mg per day of furosemide, it is better to add a thiazide diuretic, such as metolazone, than to continue to increase the patient's furosemide dose. Thiazide diuretics inhibit sodium transport in the distal tubule, although some agents, such as metolazone, may exert some proximal tubule activity as well, perhaps by blocking carbonic anhydrase. These segments normally reabsorb less of the filtered load than the loop of Henle; therefore, thiazides alone are less potent than loop diuretics. One theory suggests that by blocking the proximal tubule with metolazone, more sodium is delivered to the loop of Henle, resulting in a much greater natriuretic effect than when a loop diuretic is given alone.[33-35] More importantly, thiazides can block compensatory responses by the distal convoluted tubule to increased sodium delivery from the loop of Henle. Thiazide diuretics can be given at the same time as a loop diuretic when the two drugs are given by the oral route. Unfortunately, intravenous metolazone is not available. When a thiazide is given orally and a loop diuretic is given intravenously, the thiazide should be given 30-60 minutes in advance. Patients should be closely monitored when given combination diuretic therapy, since it can induce a profound diuresis, with electrolyte and volume depletion.
Aldosterone Receptor Blockers
In the presence of neurohormonal activation, angiotensin II causes aldosterone production in the adrenal cortex, which acts on the cortical collecting tubules to conserve sodium. Aldosterone may induce perivascular and interstitial cardiac fibrosis that may reduce systolic function, increase cardiac stiffness, and thereby impair diastolic function, generating heterogeneous intracardiac conduction defects with the potential for serious re-entrant arrhythmias. Aldosterone may also increase vulnerability to serious arrhythmias by inhibiting cardiac noradrenaline reuptake, impairing baroreflex-mediated heart rate variability, augmenting sympathetic activity, inhibiting parasympathetic flow, and impairing arterial compliance. Aldosterone also promotes potassium and magnesium depletion, which is potentially proarrhythmic.
Aldosterone was originally thought to be blocked by ACE inhibitors. However, it is now known that usual doses of ACE inhibitors do not completely suppress aldosterone production. Furthermore, there may be an "escape" of aldosterone, even when ACE activity is inhibited. Up to 40% of patients on ACE inhibitors have elevated serum concentrations of aldosterone.[36] Spironolactone (Aldactone), an aldosterone receptor blocker, can be used in the presence of heart failure to diminish the degree of potassium loss or to increase net diuresis in patients with refractory edema. By competing with aldosterone for receptor sites in distal renal tubules, spironolactone increases sodium chloride and water excretion while conserving potassium and hydrogen ions. The inhibition of sodium reabsorption leads to reduced potassium excretion. Potassium-sparing diuretics have a relatively weak natriuretic effect.
Recently, spironolactone has been recommended to attenuate some of the neurohormonal effects of heart failure. The Randomized Aldactone Evaluation Study (RALES) was designed to determine the effect of low-dose Aldactone (mean dose, 26 mg daily) on survival in severely symptomatic (New York Heart Association class IV) heart failure patients taking an ACE inhibitor, loop diuretic, and digoxin.[37] A total of 1663 heart failure patients were enrolled. The ejection fraction in these patients was less than 35% and the etiology of heart failure was from ischemic and nonischemic causes. All-cause mortality was the primary end point. There were 386 deaths in the placebo group vs. 284 deaths in the treatment group. Frequency of hospitalization for heart failure was 35% lower in the treatment group and greater improvement was noted in New York Heart Association class during follow-up.
The potential benefit of aldosterone antagonists in patients with milder heart failure cannot be determined from this study. Furthermore, patients with serum potassium greater than 5.0 mmol/L were excluded, as were patients with renal insufficiency. Relatively few patients in either group (about 10%) were treated with beta-blockers. Current recommendations state that Aldactone should be given at a low dose (12.5-25.0 mg daily) and should be considered for patients receiving standard therapy who have severe heart failure caused by left ventricular systolic dysfunction.[38]
Potassium-sparing diuretics are contraindicated in the presence of hyperkalemia and renal failure. Patients should not take potassium supplements. Aldactone should be used with caution in patients with hyponatremia, renal insufficiency, or hepatic disease. Adverse effects are listed in Table II.
In summary, loop diuretics are the mainstay of diuretic therapy in CHF. One must consider the physiologic effects, both positive and negative, when administering these drugs. If loop diuretics lose effectiveness or the patient develops refractory edema, adding a thiazide diuretic may help overcome diuretic resistance through a different mechanism of action. In recent years, aldosterone antagonists have been found to improve outcomes in patients with moderate to severe heart failure who are already on an appropriate medication regimen. Regardless of the diuretic, patients need to be counseled on the importance of avoiding sodium in their diet. Medication alone cannot overcome the neurohormonal activation associated with heart failure. While diuretics can alleviate the symptoms associated with excess extracellular fluid, it is important to monitor patients on diuretic therapy to prevent serious, potentially life-threatening complications.
Top
From American Journal of Lifestyle Medicine
Themed Review: Lifestyle Treatment of the Metabolic Syndrome
Peter M. Janiszewski; Travis J. Saunders; Robert Ross
Posted: 04/14/2008; Am J Lifestyle Med. 2008;2(2):99-108. © 2008 Sage Publications, Inc.
Abstract and Introduction
Abstract
The metabolic syndrome is a clustering of metabolic risk factors including abdominal obesity, dysfunctional glucose metabolism, dyslipidemia, and elevated blood pressure. Approximately 1 in 4 Americans currently has the metabolic syndrome and is thus at elevated risk of cardiovascular disease, type 2 diabetes, and mortality. Leading health authorities recommend lifestyle modification consisting of exercise and caloric restriction for treatment and prevention of the metabolic syndrome. The purpose of this report is to review the evidence that considers lifestyle modification as a treatment strategy for the metabolic syndrome. The influence of lifestyle modification on abdominal obesity, dysfunctional glucose metabolism, dyslipidemia, and elevated blood pressure is considered. Findings suggest that interventions consisting of exercise and/or caloric restriction are associated with improvement in all components of the metabolic syndrome, although the magnitude of this effect varies according to the specific component studied and additional factors such as baseline values. The evidence presented supports the promotion of lifestyle modification as an efficacious strategy for the treatment of the metabolic syndrome.
Introduction
The notion of a common clustering of metabolic risk factors, now clinically recognized as the metabolic syndrome, was described as early as 1923.[1] However, it was not until Reaven's 1988 Banting lecture[2] that the constellation of insulin resistance, dyslipidemia, and hypertension was first recognized as a unique clinical entity. The seminal observations of Vague[3] and others[4,5] regarding central body fat distribution and disease risk led to the subsequent inclusion of abdominal obesity as an additional component of the syndrome.[6] Although the pathogenesis of the metabolic syndrome remains elusive, both insulin resistance[2] and abdominal, specifically visceral, adiposity[7] have been proposed as causative factors in the development of the condition.
Various organizations, including the World Health Organization (WHO),[8] the National Cholesterol Education Program,[9] the International Diabetes Federation,[10] and others,[11-13] have developed unique definitions of the metabolic syndrome. Although the criteria and precise threshold values identifying the metabolic syndrome vary between organizations, they agree on 4 fundamental components: abdominal obesity, dysfunctional glucose metabolism, dyslipidemia, and elevated blood pressure.[8-13]
Although the prevalence of the metabolic syndrome is largely definition dependent,[14] according to National Cholesterol Education Program criteria, the metabolic syndrome is estimated to affect approximately a quarter of the US population and is particularly prevalent among older adults.[15] Given that the metabolic syndrome is strongly associated with risk of cardiovascular disease,[16] type 2 diabetes,[17] and mortality,[18] strategies for the prevention and treatment of the condition are needed. Leading health organizations[9,10,13,19] recommend lifestyle modification as the primary treatment strategy for the metabolic syndrome. Specifically, these guidelines target the reduction of total and abdominal obesity levels through increased physical activity and caloric restriction.[10,13] The purpose of the present review is to elucidate the effects of physical activity and caloric restriction on the major components of the metabolic syndrome: abdominal obesity, dysfunctional glucose metabolism, dyslipidemia, and elevated blood pressure. Alterations in dietary composition, which have also been suggested in the management of the metabolic syndrome,[9,10,13] have been examined in a number of excellent reviews[20-22] and will not be considered here.
Abdominal Obesity
It has been suggested that abdominal obesity may represent a central component of the metabolic syndrome, one that is mechanistically linked to other individual risk factors.[7] Since the original clinical definition of the metabolic syndrome proposed by the WHO in 1999,[8] subsequent definitions from other organizations[10,11,13] have all included a measure of abdominal obesity. Furthermore, the most recent guidelines proposed by the International Diabetes Federation have made abdominal obesity a requirement for the diagnosis of the metabolic syndrome, thereby further highlighting the importance of abdominal obesity in the condition.[10] Although the original WHO guidelines suggested the use of waist-to-hip ratio, most organizations currently advocate for waist circumference measurement to define abdominal obesity.[9-11] Indeed, it has recently been shown that abdominal obesity, as measured using waist circumference, significantly predicts risk of morbidity independent of commonly obtained metabolic risk factors and body mass index.[23]
Effect of Exercise
A recent comprehensive review[24] suggested that chronic exercise is generally associated with reduction in waist circumference and that the degree of waist circumference reduction achieved is linearly related to the magnitude of weight loss. Not surprisingly, however, a considerable interindividual variation in the magnitude of change in waist circumference (±40%) has previously been reported.[25] Nonetheless, those studies prescribing the greatest amount of physical activity (approximately 60 min/d), and thus inducing the greatest negative energy balance and weight reduction (approximately 8.0 kg), generally report the largest reductions in waist circumference (approximately 7.0 cm), independent of gender.[25,26] Predictably, more modest exercise prescriptions (approximately 30 min/d) lead to smaller reductions in waist circumference (1.0-3.0 cm).[27,28]
A limited number of trials have also considered whether a dose-response relationship exists between dose of exercise and corresponding reduction in waist circumference. In combination, results from the Studies of Targeted Risk Reduction Interventions Through Defined Exercise[29] and the Dose-Response to Exercise in Postmenopausal Women[30] study suggest that exercise is consistently associated with significant reductions in waist circumference, although neither study was able to discern a dose-response relationship between exercise dose and reductions in waist circumference in overweight men and women.[29] It is apparent that additional studies are needed to further elucidate the effect of exercise duration and intensity on reductions in abdominal obesity, in particular, waist circumference.
The association between abdominal obesity and metabolic risk may be explained by an excess accumulation of fat in the visceral depot.[7] Although visceral fat cannot be readily measured in clinic, waist circumference provides the best indirect measure of visceral fat[31] as well as visceral fat change in response to intervention.[24] The ability of waist circumference to provide a proxy measure of visceral fat is important given that independent of subcutaneous abdominal fat, visceral fat is a strong predictor of dyslipidemia,[32] insulin resistance,[33] hypertension,[34] cardiovascular disease,[35] type 2 diabetes,[36] and mortality.[37] According to recent reviews,[24,38] mirroring the effects on waist circumference discussed above, exercise training is consistently associated with reductions in visceral fat. As expected, the greatest exercise dose induces the greatest energy deficit, leading to greater weight loss and, accordingly, greater reduction in visceral fat. For example, approximately 60 minutes of daily exercise over 3 months is associated with a 1.0 kg (approximately 30%) reduction in visceral fat and a 7.0 cm reduction in waist circumference concurrent with an 8.0 kg weight loss in obese men and women.[25,26] On the other hand, approximately 20 to 25 minutes of daily exercise is reported to reduce visceral fat by only 6% to 10%, corresponding to a modest reduction in waist circumference (1.0-3.0 cm) and weight (1.4-1.8 kg) in overweight women[27] and obese women with diabetes.[39]
It is important to note that regular exercise can lead to marked reduction in abdominal obesity even when body weight is unchanged. For example, several studies have specifically examined the effect of exercise on abdominal adiposity when body weight is maintained by study design.[25,26,40] The primary findings suggest that in obese Caucasian men and women, as well as men with type 2 diabetes, significant decrements in waist circumference (2.0-3.0 cm) and visceral fat (approximately 15%) occur through exercise training despite little or no change in body weight. Similar findings have been reported in studies of type 2 diabetics[41,42] and nonobese premenopausal women.[43] As a key caveat, exercisers who lose weight generally have greater reductions in waist circumference and abdominal fat compared to exercisers who maintain body weight.[25,26] Thus, from a clinical perspective, exercise-induced weight loss is associated with the greatest reduction in abdominal fat. However, given the challenges associated with attaining substantial weight loss, it is equally important that abdominal adiposity may be reduced in response to minimal or no change in body weight and/or body mass index.
Effect of Caloric Restriction
A healthful, calorie-restricted diet has been the cornerstone of obesity treatment.[44] Accordingly, a number of studies have assessed the effects of chronic caloric restriction on reduction in waist circumference and visceral fat. Caloric restriction is consistently reported to decrease waist circumference in obese men and women.[25,26,45-48] For example, reducing caloric intake by 700 kcal/d for 3 months resulted in a 7 cm reduction in waist circumference concurrent with a 7.5 kg weight loss in obese men.[26] Similar results were also reported by the same research group in a sample of obese premenopausal women.[25] Combined evidence from several studies suggests that each kilogram of weight lost due to caloric restriction is associated with approximately a 1 cm decrease in waist circumference.[25,46,47] For example, although a 1000 kcal/d reduction in caloric intake resulted in a 10 kg decrease in body weight and a 10 cm reduction in waist circumference in obese men,[47] a mean reduction of approximately 300 kcal/d resulted in decreases of 3 kg in body weight and 3 cm in waist circumference in overweight and obese men and women.[46] Accordingly, studies that report larger decreases in caloric intake generally report greater reductions in waist circumference in accordance with larger decreases in body weight.
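To make the arithmetic in this paragraph explicit, the sketch below chains a sustained daily deficit to a projected weight loss and then to a waist reduction using the approximately 1 cm per kilogram ratio reported in the cited studies. The 7700 kcal-per-kilogram figure is a common rule of thumb for the energy content of adipose tissue, not a value taken from this review, so treat the output as a rough order-of-magnitude check:

KCAL_PER_KG_LOST = 7700   # rule-of-thumb energy content of 1 kg of body fat (assumption)
WAIST_CM_PER_KG = 1.0     # ~1 cm waist reduction per kg lost, per refs 25, 46, 47

def projected_loss(deficit_kcal_per_day, days):
    """Return (kg of weight loss, cm of waist reduction) for a sustained deficit."""
    kg = deficit_kcal_per_day * days / KCAL_PER_KG_LOST
    return kg, kg * WAIST_CM_PER_KG

kg, cm = projected_loss(700, 90)      # the 700 kcal/d, 3-month example above
print(f"~{kg:.1f} kg, ~{cm:.1f} cm")  # ~8.2 kg, ~8.2 cm; the cited study observed 7.5 kg and 7 cm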
Similarly, in reference to visceral fat, those interventions prescribing the strictest diet (very low calorie diets ranging from an intake of 800-1200 kcal/d), from 3 to 6 months in duration, tend to observe the greatest reductions in weight (10-18 kg) and visceral fat (24%-47%).[49-51] On the other hand, more moderate approaches that reduce caloric intake by 400 to 700 kcal/d see more modest reductions in visceral fat (15%-30%) and body weight (5-9 kg).[25,26,52-55] Available studies also suggest that reduction in abdominal obesity is influenced primarily by dietary adherence and the duration and severity of caloric restriction, rather than dietary composition.[46,56]
Summary
Although the preponderance of evidence suggests visceral fat is the fat depot that conveys the greatest metabolic risk and explains the association between an elevated waist circumference and morbidity, some studies have reported that subcutaneous abdominal fat is associated with health risk independent of visceral fat.[57,58] Nevertheless, chronic exercise and caloric restriction have also been shown to readily reduce subcutaneous abdominal fat.[25,26] Furthermore, although some studies suggest that women may be more resistant to reduction in abdominal obesity than are men in response to intervention,[59,60] these findings are not universal.[25] In addition, one of the only studies to assess the influence of race on abdominal fat loss suggests that Caucasians and African Americans do not differ in terms of abdominal fat reduction in response to a 20-week exercise intervention.[60] Future studies are needed to further determine the potential effect of gender and race on abdominal obesity reduction consequent to caloric restriction and/or exercise.
Dysfunctional Glucose Metabolism
Abnormalities in glucose metabolism as defined by impaired fasting glucose[8-12] or impaired glucose tolerance[8,12] are a key component of the metabolic syndrome as defined by various organizations. Both impaired fasting glucose and impaired glucose tolerance represent a prediabetic state,[61] or the transition from normal glucose metabolism to overt diabetes, and thus both have been shown to predict the risk of developing type 2 diabetes.[62,63] However, controversy exists regarding the optimal measure for predicting future diabetes risk.[64] It is recognized that more people at risk of diabetes may be identified using the combination of oral glucose tolerance test and fasting glucose levels[61,65] or even by using the oral glucose tolerance test alone.[64] However, because of the time, cost, and participant burden associated with the oral glucose tolerance test, most organizations[8-12] use fasting glucose levels to diagnose the dysfunctional glucose metabolism component of the metabolic syndrome.
Insulin resistance is well established as the key factor in the pathogenesis of impaired glucose tolerance, impaired fasting glucose, and subsequently type 2 diabetes.[61,66,67] Indeed, reduced insulin sensitivity is the earliest detectable abnormality in the development of diabetes.[66] In addition, insulin resistance has been postulated to represent the common soil for the development of both type 2 diabetes and cardiovascular disease and has been regarded as a central tenet of the metabolic syndrome.[2,68] Unfortunately, although recommended by some organizations,[8] the measurement of insulin resistance, much like that of glucose tolerance, is impractical in clinical situations and thus not commonly performed. Nevertheless, exercise (acute and chronic),[69-75] caloric restriction,[26,48,76,77] and their combination[78] have been shown both to reduce fasting plasma glucose levels and to improve insulin sensitivity, thereby attenuating the risk of diabetes.
Effect of Exercise
It has been consistently shown that significant reductions in plasma glucose levels are observed in response to a single exercise session in type 2 diabetic subjects.[70,71,79] Specifically, among diabetics, 45 to 60 minutes of moderate-intensity exercise appears to lower fasting plasma glucose levels by 1 to 2 mM (approximately 10% to 20%), and this effect can last for a few days.[79] However, this effect is exclusive to those with significantly elevated plasma glucose values, as an acute exercise bout in nondiabetic subjects with relatively normal blood glucose levels has no appreciable effect on glucose values.[79]
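A note on units: this literature mixes millimolar (mM) and milligram-per-deciliter values. Since glucose has a molar mass of about 180 g/mol, 1 mmol/L corresponds to roughly 18 mg/dL, so the 1 to 2 mM acute reduction above is about 18 to 36 mg/dL. A one-line conversion:

MG_DL_PER_MMOL_L = 18.0   # glucose molar mass ~180 g/mol -> 1 mmol/L ~ 18 mg/dL

def glucose_mmol_to_mgdl(mmol_per_l):
    """Convert a plasma glucose value from mmol/L to mg/dL."""
    return mmol_per_l * MG_DL_PER_MMOL_L

print(glucose_mmol_to_mgdl(1.0), glucose_mmol_to_mgdl(2.0))   # 18.0 36.0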
In accordance with the improvements in fasting glucose levels, significant improvements in insulin resistance as measured by the rate of glucose clearance during a euglycemic-hyperinsulinemic clamp have been achieved after approximately 1 hour of moderate-intensity exercise in obese diabetic and normoglycemic subjects,[72] insulin-resistant subjects,[73] diabetic subjects,[74] and healthy subjects.[75] The enhanced insulin sensitivity is not only present immediately after the acute exercise bout[72] but appears to persist 20 to 48 hours after exercise.[73-75] The magnitude of improvement in insulin sensitivity after a single exercise bout ranges from 15%[75] to 24%[72]—improvements that are equivalent to those achieved through chronic pharmacological intervention.[80,81]
Several reviews[82-84] also report that regular exercise improves glucose homeostasis in men and women with diabetes. However, the improvements reported in fasting and postprandial hyperglycemia in diabetic subjects after exercise training are generally quite modest. For example, a 3-month exercise intervention consisting of 40 to 60 minutes of aerobic exercise 3 to 4 times per week in overweight diabetics resulted in a 1.5 mM (15%) reduction in fasting plasma glucose levels.[85] Furthermore, results from the HERITAGE study suggest that among subjects with the metabolic syndrome, fasting plasma glucose levels are the least likely metabolic syndrome component to improve with exercise training.[86] In addition, a meta-analysis reports that reductions in glycosylated hemoglobin (HbA1c), a marker of chronic hyperglycemia, with exercise training are very modest (approximately 1%).[82] Moreover, in healthy, nondiabetic subjects, exercise is reported to have a negligible effect on fasting glucose levels.[84] For example, Ross et al reported no significant change in fasting glucose in obese men and women with normal baseline fasting glucose after 3 to 4 months of aerobic exercise.[25,26]
However, improvements in insulin sensitivity after chronic exercise are reported to be even larger than those achieved through acute exercise. Indeed, approximately 3 to 4 months of daily aerobic exercise training inducing 6% to 8% body weight reduction was shown to improve insulin sensitivity by 32% and 60% in middle-aged women and men, respectively.[25,26] In addition, among sedentary and overweight men and women who underwent exercise interventions of varying volume and intensity, significant improvements in insulin sensitivity ranging from 40% to 85% were reported.[87] Numerous studies have also shown that improvements in insulin sensitivity through chronic exercise occur independent of weight reduction.[26,39,88,89] For example, 3 months of daily aerobic training in obese men who consumed compensatory kilocalories equivalent to the amount expended during exercise resulted in a 30% improvement in insulin sensitivity despite no change in weight but a significant (12%) reduction in visceral fat.[26]
Effect of Caloric Restriction
A meta-analysis examining the results of 10 intervention studies involving 192 obese subjects with non-insulin-dependent diabetes mellitus reported that short-term (4-6 weeks), very low calorie diets (800 kcal/d) result in a 50% reduction in fasting plasma glucose levels associated with 10% weight loss.[90] For example, reducing caloric intake to 400 kcal/d for 1 month in obese, diabetic men and women resulted in a 9% decrease in body weight and a 51% decrease in fasting plasma glucose levels.[91] Longer term studies were shown to reduce plasma glucose levels more modestly (<30%), and the degree of plasma glucose reduction was linearly related to the amount of weight lost.[90] For example, Heilbronn et al[92] reported that a reduction of caloric intake to 1600 kcal/d for 3 months resulted in a 6 kg weight loss and a 14% decrease in fasting plasma glucose levels. Conversely, it was reported that a 3-month very low calorie diet resulting in a 15 kg reduction in body weight was associated with a 44% decrease in fasting glucose levels.[77] Overall, a 10 kg diet-induced weight reduction over 3 months can be expected to reduce fasting glucose levels by approximately 25%.[90] However, much like that reported for exercise, caloric restriction does not affect fasting glucose levels in subjects with normal fasting glucose values at baseline.[25,26]
Similarly, a short-term intervention (7 days) with a very low calorie diet (800 kcal/d) in type 2 diabetics was previously shown to reduce insulin resistance by 32% alongside only a 2.0 kg weight loss.[77] This evidence contrasts with another trial reporting that no change in insulin sensitivity was associated with a similar weight loss (1.7 kg) after 4 days of a 1000 kcal/d caloric restriction in diabetic subjects.[76] However, this latter study did document a significant improvement in insulin sensitivity (30%) when the caloric restriction was carried out over a 1-month period. More consistent are the results of longer term studies (3-4 months) that document 17% to 72% improvements in insulin resistance.[26,48,77] Specifically, a study by Kelley et al that imposed a highly stringent diet of 400 to 600 kcal/d for 3 months in male and female diabetics reported a 72% increase in insulin sensitivity associated with a marked weight reduction (15 kg).[77] On the other hand, Ross et al reported that 3 months of more modest caloric restriction (a 700 kcal/d reduction) in obese men was also associated with a drastic improvement in insulin sensitivity (43%), together with a 7.4 kg reduction in body weight.[26]
Summary
Finally, the results of the Diabetes Prevention Program have shown that over a period of 3 years, a lifestyle modification program, including a minimum of 150 minutes of exercise per week and a hypocaloric diet (-450 kcal/d), can prevent the deterioration of glucose metabolism and thus reduce the incidence of type 2 diabetes by 58% in at-risk individuals.[93] These results are consistent with others[78] and suggest that reducing plasma glucose levels and improving insulin sensitivity through exercise and diet can prevent the development of diabetes in predisposed patients, possibly to a greater degree than what can be achieved through pharmacological interventions.[93]
Dyslipidemia
The atherogenic lipid profile, as defined in the metabolic syndrome by hypertriglyceridemia and low levels of HDL cholesterol, has been tied to other metabolic risk factors including abdominal obesity[94] and insulin resistance.[95] Accordingly, these lipid abnormalities have been shown to predict cardiovascular-related morbidity and mortality.[96] Numerous reviews[97-101] and meta-analyses[102-104] have investigated the role of exercise and diet in dyslipidemia, the results of which are reported here.
Effect of Exercise
The overwhelming consensus among available studies suggests that exercise consistently improves HDL cholesterol and triglyceride levels.[97-99,102,103] For example, a meta-analysis of 15 randomized, controlled studies revealed that overall, 30 to 60 minutes of aerobic exercise, 3 to 5 times per week, at a moderate intensity resulted in a mean increase in HDL cholesterol levels of approximately 4% (0.05 mmol/L) and a decrease in triglyceride levels of approximately 12% (0.21 mmol/L).[102] These results are in general agreement with those of a prior review that concluded that aerobic exercise that induces an energy expenditure of 1200 to 2200 kcal/wk may bring about a 4% to 22% (0.05-0.21 mmol/L) increase in HDL cholesterol levels and a 4% to 37% (0.01-0.43 mmol/L) decrease in triglyceride levels.[98] Other analyses[99,103] have revealed similar effects of exercise on HDL cholesterol and triglyceride levels and in unison suggest that only a modest amount of exercise is required to produce significant improvements; however, a dose-response relationship has yet to be established.[99] Although some suggest that exercise-induced weight loss must be achieved to observe improvements in lipid profile,[97] others have shown that although improvements in HDL cholesterol and triglycerides are generally greater in those who lose weight, these improvements are observed even when weight remains virtually unchanged.[98,102,105] However, these changes could be mediated by improvements in body composition, such as increases in skeletal muscle mass or reductions in visceral fat.[25,26]
Effect of Caloric Restriction
A meta-analysis[104] based on evidence from 64 individual studies revealed that triglyceride levels are reduced by approximately 32% (0.66 mmol/L) in response to various calorie restriction protocols that resulted in a mean 16.6 kg weight loss. Since that time, numerous studies have corroborated these findings, reporting significant decreases in triglyceride levels as a result of caloric restriction.[46,49,106-108] For example, a study in severely obese men and women reported that a 510 kcal/d caloric deficit over 1 year reduced body weight by 5 kg and decreased triglyceride levels by 29% (0.65 mmol/L).[109] A similar reduction in triglyceride levels (0.50 mmol/L or 28%) was seen in a much shorter intervention study (4 months) that prescribed a very low calorie diet (800 kcal/d) to obese men and women.[49] These results suggest that either a moderate reduction in caloric intake of prolonged duration or a severe reduction of short duration will lead to significantly reduced triglyceride levels in obese men and women.
Caloric restriction is also associated with modest increases in HDL cholesterol levels, although the relationship is not as straightforward as that observed for triglyceride levels. Specifically, while fat loss is actively occurring, caloric restriction actually results in a transient decrease in HDL levels.[104,110] However, once body weight has stabilized and a new energy balance has been achieved, HDL levels increase above baseline.[104] In fact, a meta-analysis examining the results of 47 dietary interventions reported a mean HDL decrease of 8% during active weight loss, followed by a 12% increase above baseline once weight had stabilized.[104] Accordingly, most studies that do not include a weight stabilization period after a weight loss intervention fail to see significant improvements in HDL levels.[111,112] On the other hand, 3 months of a 1200 kcal/d diet in obese, elderly men resulted in a 0.1 mmol/L (9%) increase in HDL levels, measured after 2 weeks of weight stabilization, following a mean 10 kg loss in body mass.[50] Similar weight loss and increases in HDL have also been reported in middle-aged obese men.[113] Unfortunately, an HDL increase of this magnitude is only expected to lower the relative risk of mortality by approximately 2%.[114] Less typical are the results of Tchernof et al, who examined the effect of consuming a 1200 kcal/d diet for 3 months in obese, postmenopausal women.[115] This caloric restriction resulted in a 14.5 kg decrease in body weight and a dramatic 0.5 mmol/L (57%) increase in plasma HDL levels. It is important to note, however, that HDL measures after this intervention were completed subsequent to a 3-month weight stabilization (±2 kg) period. Thus, although available data suggest that modest increases in HDL levels are seen after caloric restriction, the timing of measurement and the inclusion of a weight stabilization period after the intervention may significantly affect the magnitude of improvement.
Elevated Blood Pressure
Elevated blood pressure leads to stroke, coronary heart disease, and renal disease, and its reduction is associated with a significant drop in the risk of cardiovascular-related morbidity and mortality.[116] Furthermore, some suggest that the magnitude of blood pressure reduction need not be large to see significant decrements in associated health risk.[117] The effect of regular exercise[118-124] or caloric restriction[125-127] on improvements in blood pressure has been the subject of many reviews, and the available evidence is summarized here.
Effect of Exercise
Inactivity is a major risk factor for high blood pressure, and sedentary individuals have up to a 50% greater chance of hypertension as compared to more active counterparts.[128] In addition, the evidence of the blood pressure-lowering effects of regular aerobic exercise is quite consistent across reports.[118-124] In fact, exercise has been shown to reduce both systolic and diastolic blood pressure in lean,[118,123] obese,[118,123] hypertensive,[118,121,122] as well as normotensive[118,119,121] subjects.
That regular aerobic exercise leads to reductions in blood pressure is established;[118-123] however, the magnitude of change generally reported depends on a number of factors. A large meta-analysis, consisting of 54 randomized, controlled trials, found that as averaged across available studies, aerobic exercise reduces systolic and diastolic blood pressure by approximately 4 and 3 mmHg, respectively.[118] These results are in general agreement with numerous other reports[119,120,123,124] and suggest modest reductions in blood pressure through exercise. Importantly, the blood pressure response to exercise intervention is highly dependent on baseline blood pressure values.[124] For example, the reductions in systolic and diastolic blood pressure in response to exercise reported among hypertensive subjects tend to be greater (-7 and -6 mmHg, respectively) than those reported in normotensive subjects (-3 and -2 mmHg, respectively).[124] Also, the duration of exercise training affects the degree of improvement in blood pressure. For example, although significant reductions in blood pressure are already reported during the 24 hours after a single exercise bout, greater improvements are seen with chronic exercise training.[122] In addition, some have suggested that females may derive a greater antihypertensive effect from exercise as compared to males.[122,129] Also, the reduction in blood pressure due to an exercise intervention may be greater for Asian than for Caucasian subjects.[118,122] Although modest weight loss (3%-9%) has been shown to lead to significant reductions in blood pressure,[126] others have reported significant blood pressure improvements through exercise independent of changes in weight.[121,123,124] With regard to ideal exercise training parameters, most reports have suggested that low to moderate intensity is ideal for inducing reductions in blood pressure,[118,122,124] but other exercise program parameters (duration, frequency) have generally not had any impact on the level of improvement.[30,118,123] Overall, aerobic exercise training of low to moderate intensity, 30 to 60 min/d on 3 to 5 d/wk, is recommended for optimal blood pressure regulation.[124]
Effect of Caloric Restriction
Much like exercise, caloric restriction has been shown to modestly decrease blood pressure in men and women. Indeed, a meta-analysis of 25 randomized controlled trials involving 4874 subjects has shown that caloric restriction that induces a mean weight loss of 6.7 kg is associated with a 5 and 4 mmHg reduction in systolic and diastolic blood pressure, respectively.[125] Specifically, it appears that those caloric restriction interventions that induce the greatest weight loss report the greatest reductions in blood pressure. For example, studies that induce a modest reduction in body weight (<4.0 kg reduction) through a small caloric deficit, on average, observe minor or nonsignificant reductions in blood pressure (-1.4 to -2.0 mmHg and +1.3 to -1.9 mmHg changes in systolic and diastolic blood pressure, respectively).[130,131] Conversely, those caloric restriction interventions that induce marked decrements in body mass (>10.0 kg reduction) by imposing highly restrictive diets report significantly greater improvements in blood pressure (-12 to -16 mmHg and -9 to -12 mmHg reductions in systolic and diastolic blood pressure, respectively).[132,133]
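Taking the pooled estimate above at face value (a mean 6.7 kg loss yielding roughly 5/4 mmHg of systolic/diastolic reduction), a crude per-kilogram scaling can be sketched as follows. Linearity is an assumption for illustration only; as the contrasting trials show, the true response also depends on diet severity and baseline pressure:

SYSTOLIC_MMHG_PER_KG = 5.0 / 6.7    # from the pooled 25-trial estimate above
DIASTOLIC_MMHG_PER_KG = 4.0 / 6.7

def projected_bp_reduction(weight_loss_kg):
    """Return (systolic, diastolic) mmHg reductions under a linear assumption."""
    return (weight_loss_kg * SYSTOLIC_MMHG_PER_KG,
            weight_loss_kg * DIASTOLIC_MMHG_PER_KG)

sys_mmhg, dia_mmhg = projected_bp_reduction(10.0)
print(f"~{sys_mmhg:.0f}/{dia_mmhg:.0f} mmHg")   # ~7/6 mmHg for a 10 kg loss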
Summary
We conclude that although exercise- and diet-induced reductions in blood pressure are modest, these improvements are generally equivalent to those achieved through treatment with antihypertensive medication (6 and 5 mmHg reductions in systolic and diastolic blood pressure, respectively).[127] Hence, although exercise and caloric restriction can reduce blood pressure to the same extent as pharmacotherapy, rarely is this decrease of sufficient magnitude to bring about normal blood pressure.[129] Nevertheless, significant decrements in health risk are expected even with marginal reductions in blood pressure.[117]
Limitations
When available, evidence of the effect of gender, race, and age on changes in the metabolic syndrome components in response to exercise and diet is presented. However, the preponderance of available evidence is derived from samples of middle-aged Caucasians, and thus much remains unknown regarding the influence of age and race on these observations. Nevertheless, limited evidence suggests that lifestyle interventions for the metabolic syndrome appear to be effective irrespective of gender and race.[86]
Conclusion
The weighted evidence, as summarized in Table 1, suggests that an increase in physical activity levels and/or a decrease in caloric intake is associated with improvement in abdominal obesity (waist circumference and visceral fat), glucose metabolism (fasting glucose and insulin sensitivity), dyslipidemia (HDL cholesterol and triglycerides), and blood pressure. The magnitude of improvement in these variables, however, is dependent on baseline values, with greater improvements generally reported among those with the greatest disturbances in metabolic status. Thus, these results support prior recommendations[10,19] and suggest that lifestyle modification, specifically moderate-intensity exercise for 30 to 60 minutes on most days of the week and/or a moderate reduction in caloric intake (approximately 500 kcal/d), will result in significant improvements in the major components of the metabolic syndrome. Whether gender, age, and race influence the improvements seen in the components of the metabolic syndrome with lifestyle modification remains unclear. Although the efficacy of exercise and/or caloric restriction in the treatment of the metabolic syndrome is evident, promotion of such lifestyle changes in today's environment remains a challenge.
Top
From Medscape Medical News
Eating Fish May Reduce the Risk for Subclinical Brain Abnormalities
Allison Gandey
August 7, 2008 — Dietary intake of tuna and other fish appears to lower the prevalence of subclinical infarcts and white-matter abnormalities, report researchers.
In the August 5 issue of Neurology, investigators show that a modest intake of fish among older adults was associated with fewer brain abnormalities on magnetic resonance imaging (MRI).
"One of the differences in this study is that we looked at various types of fish," second author David Siscovick, MD, from the University of Washington, in Seattle, said during an interview. "We also found that broiled and baked fish appeared to be beneficial, while fried fish was not."
The findings add to prior evidence suggesting fish with higher eicosapentaenoic and docosahexaenoic acid content appear to have clinically important health benefits. The American Heart Association advises that people eat fish at least 2 times a week. The recommendation promotes fatty fish such as mackerel, herring, tuna, and salmon — all high in omega-3 fatty acids.
"Our data are consistent with this recommendation," Dr. Siscovick told Medscape Neurology & Neurosurgery. He urges clinicians to discuss with patients the type of fish and how it is prepared. "Telling people to eat more fish without counseling them on these details may not have the same impact," he advised.
In the current study, investigators led by Jyrki Virtanen, PhD, from the University of Kuopio, in Finland, looked at 3660 participants aged 65 years and older. Patients were part of the population-based Cardiovascular Health Study, and all participants underwent MRI at baseline. Five years later, just over 2300 had a second scan.
Neuroradiologists assessed MRIs in a standardized and blinded manner. The researchers used food frequency questionnaires to assess diet, and participants with known cerebrovascular disease were excluded from the analysis.
Consuming Fish 3 or More Times a Week Beneficial
After adjusting for multiple risk factors, the researchers found that the risk of having 1 or more prevalent subclinical infarcts was lower among individuals who consumed tuna and other fish 3 or more times a week compared with those who ate fish less than once a month (relative risk, 0.74; 95% CI, 0.54 – 1.01; P = .06; P for trend = .03).
They found that fish consumption was also associated with trends toward lower incidence of subclinical infarcts. And fish was linked to better white-matter grade, but not with sulcal and ventricular grades — markers of brain atrophy. Investigators observed no significant associations between fried-fish consumption and any subclinical brain abnormalities.
The researchers point to several strengths of their study, such as the population-based recruitment, the large numbers of participants enrolled, and the extensive standardized examinations of other risk factors. They also prospectively collected data on dietary intake and MRI findings.
But the study also had several limitations. The researchers note that although interreader reliabilities of white-matter and ventricular grades are good, estimates of sulcal grade tend to have greater interreader variability. They also point out that the observed associations could reflect other differences associated with fish consumption, such as a generally healthier lifestyle. "However," they write, "we adjusted for a variety of other risk factors and lifestyle habits."
The investigators recommend that randomized trials of fish or fish-oil intake be conducted to assess the potential to reduce subclinical ischemic events. Such studies, they suggest, would be feasible and important given the high incidence of such events in older adults.
The researchers have disclosed no relevant financial relationships.
Neurology. 2008;71:439-446. Abstract
Top
From Journal of the American Board of Family Medicine
Hypertriglyceridemia
Rade N. Pejic, MD; Daniel T. Lee
Posted: 06/01/2006; J Am Board Fam Med. 2006;19(3):310-316. © 2006 American Board of Family Medicine
Abstract and Introduction
Abstract
Hypertriglyceridemia is a commonly encountered lipid abnormality frequently associated with other lipid and metabolic derangements. The National Cholesterol Education Program recommends obtaining a fasting lipid panel in adults over the age of 20. The discovery of hypertriglyceridemia should prompt an investigation for secondary causes such as a high-fat diet, excessive alcohol intake, certain medications, and medical conditions (eg, diabetes mellitus, hypothyroidism). In addition, patients should be evaluated for other components of the metabolic syndrome. These include abdominal obesity, insulin resistance, low high-density lipoprotein (HDL), high triglycerides, and hypertension. Hypertriglyceridemia is classified as primary hypertriglyceridemia when no secondary causes are identified. Primary hypertriglyceridemia is the result of various genetic defects leading to disordered triglyceride metabolism. It is important to treat hypertriglyceridemia to prevent pancreatitis by reducing triglyceride levels to <500 mg/dL. Furthermore, lowering triglycerides while treating other dyslipidemias and components of the metabolic syndrome will reduce coronary events. However, it is controversial how much isolated hypertriglyceridemia correlates directly with coronary artery disease, and further studies are needed to clarify whether treatment for this condition leads to meaningful clinical outcomes. Therapeutic lifestyle changes (TLC) are the first line of treatment for hypertriglyceridemia. These changes include a low saturated fat, carbohydrate-controlled diet, combined with alcohol reduction, smoking cessation, and regular aerobic exercise. High doses of omega-3 fatty acids from fish and fish oil supplements will lower triglyceride levels significantly. When patients do not reach their goals by TLC, drug therapy should be started. In cases of isolated hypertriglyceridemia, fibrates are initially considered. When elevated low-density lipoprotein levels accompany hypertriglyceridemia, 3-hydroxy-3-methylglutaryl coenzyme A reductase inhibitors are preferred. In patients with low HDL levels and hypertriglyceridemia, extended-release niacin can be considered. A combination of these medicines may be necessary in recalcitrant cases.
Introduction
Hypertriglyceridemia is defined as an abnormal concentration of triglyceride in the blood. According to the National Cholesterol Education Program Adult Treatment Panel (NCEP ATP III) guidelines, a normal triglyceride level is <150 mg/dL ( Table 1 ).[1] In the United States, the prevalence of hypertriglyceridemia, defined as a triglyceride level >150 mg/dL, is ~30%.[2,3] Hypertriglyceridemia may be primary or secondary in nature. Primary hypertriglyceridemia is the result of various genetic defects leading to disordered triglyceride metabolism. Secondary causes are acquired, such as a high-fat diet, obesity, diabetes, hypothyroidism, and certain medications.
Hypertriglyceridemia is a risk factor for pancreatitis, accounting for 1% to 4% of cases of acute pancreatitis. Although a few patients can develop pancreatitis with triglyceride levels >500 mg/dL, the risk for pancreatitis does not become clinically significant until levels are >1000 mg/dL.[1,4,5] More importantly, however, hypertriglyceridemia is typically not an isolated abnormality. It is frequently associated with other lipid abnormalities and the metabolic syndrome (abdominal obesity, insulin resistance, low high-density lipoprotein (HDL), high triglycerides, and hypertension), which are linked to coronary artery disease.[3]
Considering the current obesity epidemic, there will be a significant rise in the incidence of the metabolic syndrome. Thus, primary care physicians will encounter hypertriglyceridemia more frequently and should be familiar with the evaluation and management of this common disorder.
Pathophysiology
Dietary triglycerides are absorbed by the small intestine, secreted into the lymph system, and enter the systemic circulation as chylomicrons via the thoracic duct. Muscle and adipose tissue remove some of the triglyceride from the chylomicron and the chylomicron remnant is taken up by the liver and metabolized into a cholesterol rich lipoprotein. Although most of the triglyceride found in blood is absorbed from the small intestine, the liver produces and secretes a small amount of triglyceride. Apolipoproteins are proteins associated with lipids that assist with their assembly, transport, and metabolism. Defects in any of these structural proteins or the enzymes they interact with may result in a clinical dyslipidemia.
The Fredrickson classification scheme organizes these various primary dyslipidemias into several categories ( Table 2 ).[6] High triglycerides are a component of each of these dyslipidemias except Fredrickson type IIa (familial hypercholesterolemia). In the United States, the 2 most common dyslipidemias are Fredrickson type IIb (familial combined hyperlipidemia) and type IV (familial hypertriglyceridemia). Together, these 2 dyslipidemias account for 85% of familial dyslipidemias.
In contrast to primary hypertriglyceridemia, there are many secondary causes of hypertriglyceridemia. These include medical conditions such as diabetes mellitus, hypothyroidism, obesity, and nephrotic syndrome. In addition, certain medications ( Table 3 ), high carbohydrate diets, and alcohol can cause or exacerbate hypertriglyceridemia. Commonly, hypertriglyceridemia results from a combination of factors. For example, a patient may be found to have familial combined dyslipidemia, obesity, and high alcohol consumption.
Clinical Presentation
Most of the time, hypertriglyceridemia is discovered on a routine lipid profile. However, severe hypertriglyceridemia (>500 mg/dL) may cause pancreatitis, eruptive xanthomas, or lipemia retinalis. In some cases, extremely high levels of chylomicrons can cause the chylomicronemia syndrome, which is characterized by recurrent abdominal pain, nausea, vomiting, and pancreatitis. Triglycerides are typically >2000 mg/dL in this condition. Eruptive xanthomas are 1- to 3-mm yellow papules that can erupt anywhere but are usually seen on the back, chest, and proximal extremities. Palmar xanthomas, yellow creases on the palm, may be seen in patients with type III hyperlipidemia. Lipemia retinalis is the visualization of lipemic blood in the retinal blood vessels.
Diagnostic Evaluation
The NCEP recommends obtaining a fasting lipid panel [total cholesterol, low-density lipoprotein (LDL), HDL, and triglycerides] in patients beginning at age 20, repeated every 5 years [strength of recommendation (SOR)-C] ( Table 4 and Table 5 ).[1] In healthy asymptomatic patients without risk factors, it is acceptable to obtain a nonfasting total cholesterol and HDL cholesterol level every 5 years. However, for patients with coronary heart disease (CHD), CHD risk equivalents, familial dyslipidemia, or risk factors for CHD, a fasting lipid panel should be obtained yearly. If the triglyceride level is found to be >150 mg/dL, it should be rechecked after a 12- to 16-hour fast for confirmation. If the triglyceride level is >1000 mg/dL, beta-quantification by ultracentrifugation and electrophoresis can be performed to determine the exact dyslipidemia.
The 2 most common dyslipidemias are familial combined hyperlipidemia (type IIb) and familial hypertriglyceridemia (type IV). In type IIb, the total cholesterol, LDL cholesterol, and triglyceride levels are all elevated. In type IV, the total cholesterol and LDL levels are typically normal but the triglyceride level is elevated, usually between 500 and 1000 mg/dL. Patients with type IV disease are very sensitive to dietary modifications.
The finding of hypertriglyceridemia should prompt an investigation for other components of the metabolic syndrome [SOR-C].[3] In particular, patients should be evaluated for fasting hyperglycemia, hypertension, abdominal obesity, and low HDL levels. Thyrotropin level, serum urea nitrogen, creatinine, and urinalysis should be obtained to assess thyroid and renal function ( Table 6 ). Baseline liver function should also be assessed before starting medication. If there is a clinical suspicion of pancreatitis, amylase and lipase levels should be measured. A fasting insulin level can be measured to look for direct evidence of insulin resistance. A fasting insulin level above 15 µU/mL is abnormal. However, a fasting glucose to fasting insulin ratio provides a more sensitive and specific assessment of insulin resistance.[7] A normal glucose-insulin ratio is >4.5. Ratios <4.5 suggest insulin resistance.
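As a worked illustration of this ratio (the patient values here are hypothetical, chosen only to show the arithmetic): a patient with a fasting glucose of 90 mg/dL and a fasting insulin of 25 µU/mL would have

\[ \frac{\text{fasting glucose}}{\text{fasting insulin}} = \frac{90}{25} = 3.6 < 4.5, \]

a ratio suggesting insulin resistance, whereas the same glucose with a fasting insulin of 12 µU/mL would give 90/12 = 7.5, a normal result.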
Treatment
A major reason to treat hypertriglyceridemia is to prevent pancreatitis. The triglyceride level should be reduced to <500 mg/dL to prevent this serious disease [SOR-B].[1,4,5] The relationship between triglycerides and cardiovascular disease is less clear. There have been multiple conflicting studies regarding the role of triglycerides in the development of CHD.[8-12] Hypertriglyceridemia is clearly associated with CHD in univariate analysis. However, many multivariate studies have shown that its risk is markedly attenuated after adjustment for other strong CHD risk factors, namely, low HDL levels and increased small, dense LDL particles. These findings have led some researchers to believe that hypertriglyceridemia serves more as a proxy for the abnormal cholesterol levels and cholesterol sub-fractions with which it is frequently associated.[13] Most interventions aimed at lowering the triglyceride level also raise the HDL level, which is well known to reduce coronary events [SOR-B].[9,10] A recent review of the literature concluded that treating isolated hypertriglyceridemia does not prevent coronary events.[14] However, a thorough search for other components of the metabolic syndrome is recommended.
On the other hand, there have been many other studies that have shown hypertriglyceridemia to be an independent risk factor for CHD even after adjustment for HDL and LDL.[9,15-17] Furthermore, the NCEP considers hypertriglyceridemia to be an independent risk factor for CHD and calls for medical treatment in cases where therapeutic lifestyle changes (TLC) are not adequate to reduce the triglycerides to appropriate levels.[1] Although the extent to which hypertriglyceridemia causes CHD is controversial at present, the authors feel that because most cases of hypertriglyceridemia are associated with abnormal cholesterol sub-fractions and are frequently found in patients with CHD risk factors, treatment of hypertriglyceridemia is often warranted in conjunction with the necessary treatment of the other lipid derangements. In cases where hypertriglyceridemia is found to be the only lipid abnormality, treatment is still important to prevent pancreatitis when triglycerides are markedly elevated.
The treatment of hypertriglyceridemia begins with TLC. Specifically, a low-fat, carbohydrate-controlled diet should be adopted. Saturated fat should not make up more than 7% of total daily calories, carbohydrates should be restricted to 50% to 60% of daily calories, and simple sugars like sucrose should be avoided.[1] Patients may also consider increasing intake of oily fish (eg, salmon, mackerel, herring) to at least 2 servings per week.[18] Alcohol should be greatly reduced or stopped altogether, along with smoking cessation if indicated. Discontinuation of any offending medications should be considered as well. Titrating upward to a goal of at least 30 minutes of aerobic exercise 5 days a week is greatly beneficial. If present, diabetes and hypothyroidism should be treated accordingly. These measures often have a dramatic effect and can lower triglyceride levels by hundreds of points.[1]
If TLC and control of secondary medical conditions are not adequate to lower the triglyceride level to <200 mg/dL, then medical therapy is warranted (Figure 1). When triglyceride levels range between 200 mg/dL and 500 mg/dL, treatment should be directed primarily toward normalizing the LDL cholesterol.[1] Once the LDL is at goal, a secondary endpoint is the non-HDL cholesterol (total cholesterol-HDL). Non-HDL goals are 30 mg/dL higher than LDL goals. A tertiary treatment goal, particularly in the setting of CHD or CHD risk equivalents, is to raise the HDL to >40 mg/dL. The benefit of medically treating triglyceride levels between 200 mg/dL and 500 mg/dL when the other lipid sub-fractions are normal is less clear and medical management in these patients should be individualized ( Table 7 ).[14]
Figure 1.
Treatment Algorithm.
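To make the non-HDL cholesterol arithmetic above concrete, a brief worked example (the patient values are hypothetical): consider a patient with a total cholesterol of 230 mg/dL, an HDL of 35 mg/dL, and an LDL goal of 130 mg/dL. Then

\[ \text{non-HDL} = 230 - 35 = 195\ \text{mg/dL}, \qquad \text{non-HDL goal} = 130 + 30 = 160\ \text{mg/dL}, \]

so this patient sits 35 mg/dL above the secondary (non-HDL) target even once the LDL goal is met.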
In cases of isolated hypertriglyceridemia, fibrates, such as gemfibrozil and fenofibrate, may be used because they are potent reducers of triglycerides. Furthermore, fish oil supplementation may be added to augment the fibrate treatment, and in some cases, the patient may elect to try fish oil supplementation as first-line therapy. Fish oils have a dose-dependent effect, and many patients will need 2 g to 4 g a day of fish oil supplementation to achieve goals. Omega-3 fatty acids (4 g per day) will reduce triglyceride levels by 30%.[19,20] However, at this dosage, omega-3 will elevate the LDL by 5% to 10% and will have little effect on the HDL.[20] Fish oil capsules can be taken at any time of the day, with or without food, together or in divided doses. However, as the capsules dissolve in the stomach and release the oil, many people experience a "fishy burp." Taking the capsules at bedtime, freezing them, taking enteric-coated capsules, or taking them with food may minimize or eliminate this problem.[21]
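A rough worked example of the dose response described above (the starting triglyceride value is hypothetical): for a patient with triglycerides of 600 mg/dL who takes 4 g per day of omega-3 fatty acids, a 30% reduction would give

\[ 600 \times (1 - 0.30) = 420\ \text{mg/dL}, \]

below the 500 mg/dL pancreatitis-prevention threshold, though still above the 200 mg/dL level at which further therapy is considered.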
In many circumstances, the LDL is elevated in addition to the triglycerides. In these cases, the 3-hydroxy-3-methylglutaryl coenzyme A (HMG-CoA) reductase inhibitors (statins) should be used to lower the LDL to the patient's goal based on NCEP ATP III guidelines.[1] Niacin or fibrates may be added if the LDL and/or the triglycerides remain too high. However, physicians should add fibrates with great caution, as combining them with the HMG-CoA reductase inhibitors increases the risks of severe myopathy and hepatotoxicity. This combination particularly should be avoided in the elderly, in patients with acute or serious chronic illnesses (especially chronic renal disease), in those undergoing surgery, and in patients receiving multiple medications.[22]
For other mixed dyslipidemias involving high triglycerides and low HDL, niacin may be considered. There are 3 available preparations of niacin: immediate acting, long acting, and extended release. Immediate-acting niacin must be taken 3 times daily and is associated with flushing, hyperglycemia, and gastrointestinal side effects. The long-acting preparation can be taken once daily and causes less flushing. However, because its absorption generally takes more than 12 hours, it carries a higher risk of hepatotoxicity and therefore is not recommended. The preferred preparation is extended release (ER) niacin.[23] ER niacin has a lower rate of flushing and none of the additional hepatotoxicity risk found with the long-acting preparations. Furthermore, ER niacin can be dosed once daily, resulting in better adherence, because it is typically absorbed over 8 to 12 hours. ER niacin has been shown to lower the triglyceride level by ~25% and raise the HDL level by almost 30%.[24] To prevent flushing, a low starting dose of niacin should be taken immediately after the evening meal and increased at monthly intervals. In addition, aspirin (325 mg) may be taken 30 to 60 minutes before any form of niacin to further reduce the incidence of flushing. Niacin should be used with caution in patients with diabetes (including glucose intolerance) and gout, as it may increase blood sugar and uric acid levels, respectively. Niacin is contraindicated in patients with active peptic ulcer disease.
Finally, it is important to note that patients with severe hypertriglyceridemia (over 1000 mg/dL) often need a combination of medicines to achieve their goal (Figure 1). In addition, they will benefit from strict adherence to TLC including a very low fat diet and complete abstinence from alcohol. If patients do not reach their goals with the above treatment regimens, a referral to a lipid specialist and medical dietician may be warranted. In addition, keep in mind that some physicians may be tempted to add bile acid binding resins to help treat elevated total cholesterol and LDL. However, these medications can worsen triglyceride levels and should not typically be used in patients with significantly elevated triglycerides.[25,26]
Follow Up
If medical treatment is started, patients may have their lipids tested as soon as 1 to 2 months after initiating treatment because the full effect of the medicines is seen within this interval. Furthermore, this allows for titration of the medicines to appropriate levels in an expedient manner.[26] This testing interval may continue until the therapeutic goal is reached. Once stable, the interval between tests can be extended to every 6 months. It is reasonable to monitor liver enzymes concurrently with every lipid draw while on lipid medications. Patients taking niacin should also have their blood sugar and uric acid levels checked routinely as indicated.
Patients treated for hypertriglyceridemia should also be monitored for development of the metabolic syndrome. Specifically, patients should have their blood pressure, fasting blood sugar, and weight measured at regular intervals. Clinicians should encourage patients to adhere to their diet and exercise program. Referral to a dietician is recommended for formal medical nutrition therapy if patients fail informal dietary counseling. Finally, every attempt should be made to help patients stop smoking cigarettes and reduce their alcohol intake.
From Current Opinion in Rheumatology
Recent Developments in Diet and Gout
Susan J Lee; Robert A Terkeltaub; Arthur Kavanaugh
Posted: 03/15/2006; Curr Opin Rheumatol. 2006;18(2):193-198. © 2006 Lippincott Williams & Wilkins
Abstract and Introduction
Abstract
Purpose of Review: Gout is the most common inflammatory arthritis in men, affecting approximately 1-2% of adult men in Western countries. United States gout prevalence has approximately doubled over the past two decades. In recent years, key prospective epidemiological and open-labeled dietary studies, coupled with recent advances in molecular biology elucidating proximal tubular urate transport, have provided novel insights into roles of diet and alcohol in hyperuricemia and gout. This review focuses on recent developments and their implications for clinical practice, including how we advise patients on appropriate diets and alcoholic beverage consumption.
Recent Findings: Studies have observed an increased risk of gout among those who consumed the highest quintile of meat, seafood and alcohol. Although limited by confounding variables, low-fat dairy products, ascorbic acid and wine consumption appeared to be protective for the development of gout.
Summary: The most effective form of dietary regimen for both hyperuricemia and gout flares remains to be identified. Until confirmed by a large, controlled study, it is prudent to advise patients to consume meat, seafood and alcoholic beverages in moderation, with special attention to food portion size and the content of non-complex carbohydrates, which is essential for weight loss and improved insulin sensitivity.
Introduction
Gout has been recognized for centuries and is currently the most common inflammatory arthritis in men, affecting approximately 1-2% of adult men in Western countries.[1,2] Classically, gout presents as a recurrent, acute, monoarticular or oligoarticular arthritis. In some cases, it can progress to a chronic polyarticular arthritis associated with bony deformities. Gout is a multifactorial disease characterized by hyperuricemia and monosodium urate monohydrate crystal deposition in the joint and uric acid calculi in the urinary tract.
The associations of gout with obesity and overindulgence in alcohol and certain foods are classical observations. Traditionally, many patients with gout had been advised to restrict their alcohol use and ingestion of all purine-rich foods, including vegetables. Several key prospective epidemiological studies by Choi et al.[3*-7*] and an open-labeled dietary study by Dessein et al.,[8] coupled with recent advances in molecular biology elucidating the cellular mechanism for urate transport, however, have provided new insights into the role of diet in the development of hyperuricemia and gout. Here, we focus on some of these key studies and their implications for clinical practice, how we specifically advise patients and future research directions.
The Impact of Obesity on Gout
Gout prevalence has approximately doubled over the last two decades, now affecting over 5 million Americans, according to the National Health and Nutrition Examination Survey III (NHANES III). Men are affected more frequently than women, with a prevalence ranging from 6.6 to 44.1 per 1000 in men and 3.9 to 18.2 per 1000 in women.[1,2,9-11] The age-adjusted annual incidence of gout has also increased dramatically over the past two decades (from 45 to 62.3 per 100 000).[12] Major factors thought to play a role in the rising prevalence of gout include increases in longevity, use of diuretics and low-dose aspirin, obesity, end-stage renal disease, hypertension and metabolic syndrome.[13,14,15**]
Obesity, defined as a body mass index (BMI) of more than 30 kg/m2, is an enormous public health problem. Data from NHANES have shown a rise in the age-adjusted prevalence of obesity from 22.9% during 1988-1994 to 30.5% during 1999-2000.[16] Current dietary trends, with higher consumption of meat, seafood and fat, in combination with inactivity have contributed to this rising prevalence of obesity. Obesity has been implicated as the second leading preventable cause of death in the United States, with an estimated 280 000 excess annual attributable deaths.[17] Furthermore, obesity is linked to development of insulin resistance (metabolic syndrome), which is complicated by hypertension, hyperlipidemia, coronary artery disease and hyperuricemia. Metabolic syndrome has also been associated with an increased risk of gout.[15**,18,19]
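For readers less familiar with the index, BMI is weight in kilograms divided by the square of height in meters; a hypothetical example:

\[ \text{BMI} = \frac{\text{weight (kg)}}{\text{height (m)}^2} = \frac{95}{(1.75)^2} \approx 31.0\ \text{kg/m}^2, \]

which meets the definition of obesity given above.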
Several epidemiological studies have observed an increased risk of gout in patients with obesity. The Boston Veterans Administration Normative Aging Study[20] prospectively followed 2280 healthy men, aged 21-81 at entry in 1963, and evaluated the incidence of gout and its associated risk factors. Although serum urate level was the most important predictive factor, a proportional hazards regression analysis showed that BMI also was a significant independent predictor for the development of gout. Similarly, data from the Johns Hopkins Precursors Study[21] on 1216 men and 121 women with 40 000 person-years of follow-up noted a strong dose-response effect of BMI on the development of gout. The cumulative prevalence rose from 3.2 per 100 among men with a BMI of less than 22 kg/m2 to 14.8 per 100 among men with a BMI of more than 25 kg/m2. In addition to the absolute BMI, the relative increase in BMI over time was also associated with an increased risk for gout, with the cumulative prevalence rising to 14.5 per 100 for men who gained more than 1.88 BMI units.
Most recently, the prevalence of gout in relation to BMI was assessed using data from the Health Professionals Follow-up Study.[3*] This is a large, ongoing longitudinal cohort of 51 529 predominantly Caucasian male health professionals, aged 40-75 at entry in 1986. During the 12-year follow-up, there were 730 newly diagnosed cases of gout. A clear dose-response relationship was noted between BMI and the risk for gout, with the age-adjusted relative risk (RR) increasing from 1.4 to 3.26 for BMIs of 21-23 and 30-35 kg/m2, respectively. Compared with those with stable weight over time, men who gained more than 30 lbs since age 21 had a higher RR of 2.47 for gout after adjusting for age and weight at age 21 years. In contrast, a loss of more than 10 lbs since study entry was associated with a 39% reduction in the risk of gout (RR 0.61). Similarly, the Nurses Health Study[22] of 92 224 women with no history of baseline gout found a similar dose-response relationship between BMI and the risk of gout, with RRs of 6.13 and 10.59 for BMIs of 30-35 and over 35 kg/m2, respectively. These observations lend further support to weight loss as a means of preventing recurrent gout attacks. The most effective dietary modification for patients with gout, however, remains controversial, as discussed below.
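A brief note on reading these relative risks, since the text moves between RRs and percentages: the percentage change in risk is (RR − 1) × 100%. For the figures above,

\[ RR = 2.47 \Rightarrow (2.47 - 1) \times 100\% = 147\%\ \text{higher risk}; \qquad RR = 0.61 \Rightarrow (1 - 0.61) \times 100\% = 39\%\ \text{lower risk}. \]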
The Impact of Diet, Including Dairy Products and Ascorbate, on the Serum Uric Acid
The serum urate level depends on the balance between dietary intake, endogenous synthesis and net uric acid excretion. The annual incidence of gout directly correlates with serum urate level.[3*,4*,20-23] As such, patients with gout have typically been advised to avoid foods rich in purines, such as meat and seafood. Until recent studies by Choi and his co-workers, however, the relationship between the intake of purine-rich foods, the level of serum uric acid and the incidence of gout had not been studied prospectively.
Recently, the specific relationship between the intake of purine-rich foods, protein and dairy products, and the level of serum uric acid was evaluated using NHANES III data. Data analyzed were from a prospective cohort[4*] of 14 809 participants (6932 men and 7877 women) selected from 1988 to 1994 to simulate a representative sample of the non-institutionalized civilian population. The mean serum uric acid was 5.32 mg/dl and 18% had hyperuricemia. Serum uric acid increased significantly as a function of meat and fish intake, with the multivariate odds ratios (ORs) of 1.37 (95% CI 1.05-1.80) and 1.58 (95% CI 1.07-2.34), respectively. Total protein intake, however, was not associated with an increase in serum uric acid. In fact, high-protein diets have been associated with an increased urinary excretion of uric acid and may actually lower serum uric acid.[24] Therefore, patients should be cautioned against using the protein content of foods as a surrogate marker of purine content.
Several studies have suggested a protective effect of low-fat dairy product consumption on serum uric acid levels. Consistent with prior studies, a significant inverse association was also noted between the intake of dairy and serum uric acid level in the NHANES III (OR 0.66; 95% CI 0.48-0.89).[23] The dairy proteins casein and lactalbumin were thought to lower serum uric acid level by inducing urinary excretion of uric acid.[25-27] Such direct uricosuric effects of the proteins in dairy products are relatively weak, as illustrated in a study[27] of nuns after menopause.
Previous studies[28,29] have suggested a significant uricosuric effect of vitamin C. Recently, the effect of vitamin C on serum uric acid level was evaluated in a double-blind placebo-controlled study[30*] of 184 participants who received either placebo or 500 mg per day of vitamin C for 2 months. Both groups had similar intake of protein, purine-rich foods and dairy products at baseline. The serum uric acid level, however, was lowered only in the vitamin C group. Among those who had hyperuricemia at baseline (uric acid greater than 7 mg/dl), vitamin C supplementation resulted in a mean uric acid reduction of 1.5 mg/dl (P = 0.0008, adjusted for age, sex and baseline serum uric acid and ascorbic acid level). It has been postulated that vitamin C may decrease serum uric acid by both increasing renal secretion and decreasing renal re-absorption of uric acid through competitive binding activities. Despite this potential benefit of vitamin C supplementation, its role in the prevention and management of gout has not been established.
The Impact of Diet Including Dairy Products on Gout
Heavy consumption of purine-rich foods ('feasting'), particularly with concurrent alcohol intake, has long been associated with the development of flares of acute gout. The relationship between the consumption of purine-rich foods and the risk of developing gout was recently evaluated in a large prospective cohort[31] of more than 47 000 male health professionals aged 40 and older without gout at baseline (the Health Professionals Follow-Up Study). During the 12-year follow-up,[32] 730 new cases of gout were identified, with a peak between 55 and 69 years of age. Validated semi-quantitative food-frequency questionnaires were used to obtain dietary information every 2 years. Men in the highest quintiles of meat and seafood intake were noted to have an increased risk of gout compared with those in the lowest quintile, with ORs of 1.41 (95% CI 1.07-1.86) and 1.51 (95% CI 1.17-1.95), respectively. For those with the highest seafood intake, this observed risk was heightened among those who were less overweight (BMI of less than 25 kg/m2). This seemingly paradoxical observation may be related to a difference in purine metabolism, though this remains purely speculative at this time.
Neither total protein intake nor consumption of purine-rich vegetables was associated with an increased risk of gout. Indeed, men with the highest quintile of vegetable protein had lower risk of gout compared with those with the lowest quintile (OR 0.73). Similarly, dairy intake was inversely correlated with the risk of gout, with OR of 0.56. This protective effect was only evident with low-fat dairy products, such as skim milk and low-fat yogurt. It remains possible that small changes in uricosuria induced directly by dairy products over periods of many years can reduce the risk of developing gout. Subjects with high use of low-fat dairy products, however, may represent a distinct population subgroup, as those consuming low-fat dairy may be more attuned to health issues in general. Observed associations with incident gout in the Health Professionals Follow-Up Study appeared to be independent of other individual risk factors, such as age, underlying medical conditions (e.g. hypertension, renal failure), alcohol use, the use of diuretics and BMI, with the noted exception of seafood intake.[5*,33] Ascertainment bias, however, may well have entered into calculations of both serum uric acid level in NHANES III and the risk of developing gout in heavy consumers of dairy products in the Health Professionals Follow-Up Study.[4*,5*]
In the Health Professionals Follow-Up Study, the authors evaluated the robustness of the results by using various definitions of gout. These associations tended to become more prominent as more specific definitions of gout were used. As the study was restricted to middle-aged men, the results cannot be generalized to the overall population without further study. A prospective study of the 92 224 women in the Nurses Health Study,[34] however, noted a similar protective effect of dairy product consumption - especially low-fat dairy products - on the incidence of gout (OR 0.82). Both of these studies would have been further strengthened if the direct impact and interaction of insulin resistance and metabolic syndrome on the risk of gout had been assessed. Nonetheless, these were the first two large prospective studies of their kind and they have added significantly to our current understanding of the impact of diet on the incidence of gout.
The Impact of Alcohol on Hyperuricemia and Gout
Alcohol has long been associated with hyperuricemia and gout. In 1876, Alfred Garrod wrote that 'the use of fermented liquors is the most powerful of all the predisposing causes of gout, nay so powerful that it may be a question whether gout would ever have been known to mankind had such beverages not been indulged in'.[35] Increased uric acid production and decreased uric acid excretion have both been implicated in the pathogenesis of alcohol-induced hyperuricemia. Specifically, alcohol metabolism produces net adenosine triphosphate degradation to adenosine monophosphate, which is subsequently converted to uric acid. In addition, lactate generated via alcohol consumption increases proximal tubular urate re-absorption while interfering with urinary urate secretion.[36,37] Chronic heavy use of alcohol also has the potential to inhibit conversion of the pro-drug allopurinol to its active metabolite oxypurinol.[38]
Recent advances in molecular biology have defined the cellular mechanisms behind the long-recognized capacity of ketosis to markedly raise serum uric acid level. Urate transporter-1 (URAT1) is an electroneutral transporter that is centrally involved in urate re-absorption at the proximal tubule lumen membrane. Alcohol ingestion directly induces temporary lactate generation and also potentially indirectly triggers ketoacidosis through the fasting often associated with heavy alcohol ingestion. Ketoacids not only compete with urate for secretion but also activate proximal tubular urate re-absorption by activating the organic anion exchange function of URAT1[15**,36,37,39,40] (Fig. 1). URAT1 is located on the apical plasma membrane of proximal tubular cells in human kidneys and is the central factor in reabsorbing tubular uric acid (as urate anion) from the lumen in exchange for intracellular organic anions. The urate re-absorption transport process via URAT1 is triggered by high loads of lactate and several other organic anions. Hence, intense alcohol use, dietary ketosis and prolonged anaerobic muscular activity are among the activities that promote renal urate re-absorption.
Figure 1.
Central role of URAT1 urate-organic anion exchange in renal proximal tubule urate re-absorption
The hyperuricemic effect of alcohol has since been observed in many studies. Most recently, NHANES III[6*] was used to assess the impact of various alcoholic beverages on the serum uric acid level. Consistent with previous studies, the serum uric acid level increased with increasing total alcohol intake. This association was only notable with beer and liquor consumption where those with the highest intake (at least one serving per day) had serum uric acid levels of 0.99 mg/dl (95% CI 0.82-1.17) and 0.58 mg/dl (95% CI 0.36-0.80) higher than non-drinkers, respectively. This difference became more prominent among women and those with BMIs of less than 25 kg/m2. Surprisingly, an inverse relationship was noted with wine intake, with those drinking at least one serving per day having a lower serum uric acid level (-0.23 mg/dl; 95% CI -0.48 to -0.03). The mechanism for this protective effect of wine remains unknown. It has been postulated, however, that antioxidants contained in wine, or greater attention to diet and health issues in wine drinkers, may mitigate the potential deleterious effects of alcohol.
Another small study[41] also noted differing effects of different types of alcohol on serum uric acid. Four gout patients were given regular beer, liquor (vodka with orange juice), non-alcoholic beer or orange juice on separate occasions. Patients were monitored for both serum and urine urate levels. The serum uric acid rose significantly only after the ingestion of regular beer. In addition, both regular and non-alcoholic beer reduced the urinary excretion of urate.
In addition to hyperuricemia, the Health Professionals Follow-Up Study[7*] also found that alcohol was associated with an increased risk of developing gout. The risk of gout increased with increasing intake of total alcohol, with the greatest association with beer, followed by spirits. Compared with those who did not drink, men who drank more than two drinks per day had multivariate RRs of 2.51 and 1.60 for beer and spirits, respectively. The risk of gout increased by 1.17 (95% CI 1.11-1.22) per 10 g increase in daily alcohol intake. Similar to the results from NHANES III, wine was not associated with an increased risk of gout. Beer, unlike most other forms of alcohol, has a high malt-derived content of the readily absorbable purine guanosine, which can further increase uric acid production. This problem is not avoided by the use of reduced-carbohydrate 'light beer'.
The Impact of Dietary Interventions on Hyperuricemia and Gout
Prior to these recent prospective studies, patients were advised to follow a relatively unpalatable low-purine, low-protein and alcohol-restricted diet. When compliant, this type of diet is expected to decrease serum uric acid by ~15% (~1-2 mg/dl or 60-120 µmol/l) at a maximum.[10] Given the strong association between gout and insulin resistance, and the weaker association with total daily protein intake, however, the dietary recommendation has shifted to focus more on weight reduction, with moderate carbohydrate restriction and an increased proportion of protein and unsaturated fats. Foods that are low in purine tend to be higher in carbohydrates, which can further increase insulin resistance. Diets high in monounsaturated fats and low in carbohydrates have been shown to improve insulin sensitivity and lower postprandial glucose, plasma insulin and fasting triglycerides.[42]
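The parallel mg/dl and µmol/l figures quoted throughout this article are related by the molar mass of uric acid (about 168.1 g/mol); the conversion, shown for transparency:

\[ 1\ \text{mg/dl} = 10\ \text{mg/l} = \frac{0.010\ \text{g/l}}{168.1\ \text{g/mol}} \approx 59.5\ \mu\text{mol/l}, \]

so the ~1-2 mg/dl decrease cited here corresponds to roughly 60-120 µmol/l.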
A recent small, open-labeled study[8] was conducted to evaluate the impact of a low-carbohydrate, calorie-restricted diet, generous in monounsaturated fats and tailored for insulin resistance, on the level of serum uric acid and the frequency of gout attacks. Thirteen patients with gout were placed on a 1600 kcal per day diet, comprising 40% carbohydrates, 30% protein and 30% fat, for 16 weeks ( Table 1 ). Participants noted an average 17-lb weight loss and a 17% decrease in serum uric acid (1.67 mg/dl or ~100 µmol/l) without a flare of gout during the study period. Although promising, with an improved lipid profile and a lowered number of gout attacks, the reduction in serum uric acid achieved with this diet approximated that reached with the traditional low-purine diet. Therefore, the effectiveness of a low-carbohydrate diet still needs evaluation in a randomized, controlled trial.
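Translating that dietary prescription into grams, using the standard energy densities of 4 kcal/g for carbohydrate and protein and 9 kcal/g for fat:

\[ \text{carbohydrate: } \frac{0.40 \times 1600}{4} = 160\ \text{g}; \quad \text{protein: } \frac{0.30 \times 1600}{4} = 120\ \text{g}; \quad \text{fat: } \frac{0.30 \times 1600}{9} \approx 53\ \text{g per day}. \]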
A growing public interest in weight-loss programs exists. Americans spend more than $33 billion a year on weight-loss-related products, with up to 44% of women and 29% of men being on some diet at any given time.[8] The latest popular diet programs include high-protein/high-fat/low-carbohydrate diets, such as Atkins™, South Beach™ and Zone™. In contrast to the American Heart Association's (AHA) recommendation that a diet be composed of 50-60% carbohydrates, less than 30% fat and 12-18% protein of total daily caloric intake, the unmodified Atkins diet is composed of 5% carbohydrates, 60% fat and 35% protein[43,44] ( Table 1 ). Several considerations regarding the potential effects of these diets on gout exist. These diets encourage patients to take in foods that are rich in purine, such as meat and seafood, which have been associated with a higher risk of gout. Moreover, these diets are high in fat and can induce ketosis and subsequent hyperuricemia, as described above. Interestingly, even the official Atkins website (www.atkins.com) cautions patients about potential flares of gout with the diet. Unfortunately, to date, there are no controlled studies on the impact of these ketogenic diets on serum uric acid levels and the frequency of gout flares. A major question is whether the reduction in BMI achieved by such diets outweighs the theoretical risk of induced ketosis worsening hyperuricemia. In theory, weight-reduction diets that induce less ketosis would be preferred by the practitioner for gout patients, but patient preference and acceptance are critical factors in dietary weight-loss programs.
Conclusion
The US prevalence of gout has risen, particularly in those over the age of 65.[45] Dietary and lifestyle changes, including the popularity of diets high in meat and seafood and the rising consumption of beer, may be contributing. Recent studies highlight the importance of dietary measures and alcohol restriction in the prevention of hyperuricemia and gout. The treatment of underlying risk factors remains a key cornerstone in the management of gout. Lifestyle modifications, including dietary intervention, weight loss and reduction of alcohol, can significantly lower the serum uric acid and the risk of developing gout. More importantly, diet in the gout patient can be employed for preventive effects on insulin resistance, hyperlipidemia, atherosclerosis, hypertension and alcoholic liver disease. The most effective forms of dietary regimens for both hyperuricemia and gout flares remain to be identified. A recent open study of a low-carbohydrate, high-protein diet, featuring preferential monounsaturates in the fat component and tailored for insulin resistance, however, appeared promising for patients with gout.
Until a large, controlled study confirms these findings, it is prudent to advise patients to consume meat, seafood and alcoholic beverages in moderation, with special attention to food portion size and the content of non-complex carbohydrates, which is essential for weight loss and improved insulin sensitivity. Although total protein intake does not appear to be correlated with hyperuricemia and the risk of gout, given the potential for ketosis induced by currently popular 'low-carb' diets, caution should be exercised during the initiation phase of these diets.
Gout patients should consider fulfilling their protein quota with purine-rich vegetables instead of meat and seafood. Low-fat dairy products and wine appear to be protective against the development of gout in several epidemiological studies. Due to potential confounding variables and ascertainment bias, however, it remains premature to recommend their use (in preference to other foods and beverages) in the management and prevention of gout or hyperuricemia, particularly in the patient with stable gout under pharmacologic control. Heavy consumption of alcohol, in the form of either beer or liquor, however, should be discouraged, as it increases serum uric acid and potentially promotes gout flares, particularly in association with heavy meals. With a better understanding of the impact of these lifestyle modifications, physicians can more effectively educate and motivate patients on non-pharmacological measures to better manage gout.
From Current Opinion in Rheumatology
A Prescription for Lifestyle Change in Patients with Hyperuricemia and Gout
Hyon K. Choi
Posted: 02/26/2010; Curr Opin Rheumatol. 2010;22(2):165 © 2010 Lippincott Williams & Wilkins
Abstract and Introduction
Abstract
Purpose of review This review summarizes the recent data on lifestyle factors that influence serum uric acid levels and the risk of gout and attempts to provide holistic recommendations, considering both their impact on gout as well as on other health implications.
Recent findings Large-scale studies have clarified a number of long-suspected relations between lifestyle factors, hyperuricemia, and gout, including purine-rich foods, dairy foods, various beverages, fructose, and vitamin C supplementation. Furthermore, recent studies have identified the substantial burden of comorbidities among patients with hyperuricemia and gout.
Summary Lifestyle and dietary recommendations for gout patients should consider overall health benefits and risk, since gout is often associated with the metabolic syndrome and an increased future risk of cardiovascular disease (CVD) and mortality. Weight reduction with daily exercise and limiting intake of red meat and sugary beverages would help reduce uric acid levels, the risk of gout, insulin resistance, and comorbidities. Heavy drinking should be avoided, whereas moderate drinking, sweet fruits, and seafood intake, particularly oily fish, should be tailored to the individual, considering their anticipated health benefits against CVD. Dairy products, vegetables, nuts, legumes, fruits (less sugary ones), and whole grains are healthy choices for the comorbidities of gout and may also help prevent gout by reducing insulin resistance. Coffee and vitamin C supplementation could be considered as preventive measures as these can lower urate levels, as well as the risk of gout and some of its comorbidities.
Introduction
A number of epidemiological studies from a diverse range of countries suggest that gout has increased in prevalence and incidence in the last few decades. This is likely explained by trends in lifestyle factors associated with the risk of gout.[1,2] Recently, large-scale studies [e.g. the Health Professionals Follow-up Study (HPFS) and the Third National Health and Nutrition Examination Survey (NHANES III)] have clarified a number of long-suspected relations between lifestyle factors, hyperuricemia, and gout.[3–14] These studies confirmed some of the long-purported dietary risk factors for hyperuricemia and gout – meat, seafood, beer, liquor, adiposity, weight gain, hypertension and diuretic use. Other putative risk factors, such as protein and purine-rich vegetables, were exonerated, and a potential protective effect of dairy products was newly identified.[3,8] Subsequently, several novel factors that had not been included in traditional lifestyle recommendations have been identified, including major offending factors like fructose and sugar-sweetened soft drinks[13,15•,16••,17•] and protective factors such as coffee[11,18] and vitamin C supplements.[7,12,19••]
Applying our knowledge on these risk factors for gout into clinical and public health practice requires considering the health impact of these factors on the frequent comorbidities of hyperuricemia and gout. This consideration is particularly relevant because a number of major cardiovascular–metabolic conditions often co-occur in these patients.[5,9,20–25] In an extreme scenario, if a certain lifestyle modification can reduce the risk of recurrent gout, but would also contribute to the risk of a major health outcome such as acute myocardial infarction or premature death, it would be difficult to justify the long-term implementation of such a modification among gout patients, particularly given their comorbidities and increased risk of cardiovascular disease (CVD).
The current review summarizes an update on the most recent data on lifestyle factors and the risk of gout and attempts to provide holistic recommendations, considering both their impact on the risk of gout as well as other potential health implications.
Pathophysiologic Considerations in Lifestyle Recommendations for Gout
The amount of urate in the body, the culprit in the pathogenesis of gout, depends on the balance between dietary intake, synthesis, and excretion.[2] Hyperuricemia results from the overproduction of uric acid (10%), underexcretion of uric acid (90%), or often a combination of the two.[2] Thus, while lifestyle factors such as the oral purine load can contribute to the uric acid burden and the risk of gout to a certain level, factors that affect renal uric acid excretion, or both production and excretion, would likely have a substantially higher impact on the uric acid burden and the risk of gout. The first line of lifestyle factors (e.g. meat, seafood, alcohol, fructose-rich food) that affects serum uric acid levels can acutely lead to the risk of urate crystal formation and gout attacks. In comparison, the latter factors that affect insulin resistance (e.g. adiposity, dairy intake, coffee, fructose) and the renal excretion of urate can affect uric acid levels[26–28] and the risk of gout over the long term. Traditional lifestyle approaches have almost exclusively focused on acute gout prevention with the first line of risk factors. However, since the insulin resistance syndrome is a highly prevalent comorbidity among gout patients[9,10] (see Table 1 and below for detail) and has severe cardiovascular–metabolic consequences (Table 2),[5,20–25,29–33] it is important to consider the factors that can improve insulin resistance, particularly in long-term lifestyle recommendations.
Although gout's cardinal feature is inflammatory arthritis, gout is a metabolic condition associated with elevated uric acid burden.[2,34] A number of associated cardiovascular–metabolic conditions have been identified, including increased adiposity,[5,9] hypertension,[5,29] dyslipidemia,[9] insulin resistance,[9,25] hyperglycemia,[9,25] certain renal conditions[25,35••] and atherosclerotic cardiovascular disorders[20–22,23•] ( Table 1 and Table 2 ). Recent studies have quantified the magnitude of these associations with these comorbid conditions. For example, in a representative sample of US adult men and women (NHANES III), the prevalence of the metabolic syndrome, as defined by the revised National Cholesterol Education Program Adult Treatment Panel III (NCEP/ATP III), was 63% among US adults with gout and 25% among individuals without gout.[9] Previous hospital-based case series reported that the prevalence of the metabolic syndrome was 82% in Mexican men[36] and 44% in Korean men.[37] These quantitative population data indicate that the prevalence of the metabolic syndrome is remarkably high among individuals with hyperuricemia and gout. Correspondingly, the prevalence of the metabolic syndrome increased substantially with increasing levels of serum urate, from 19% for serum urate levels less than 6 mg/dl to 71% for levels of 10 mg/dl or greater.[10]
The prevalence of other major related comorbidities is summarized in Table 2 . Whereas more than 50% of gout patients have hypertension,[5,9,29] coronary artery disease (CAD) has been observed in 25% of gout patients in the UK[30] and 18% of US health professional men with gout.[22] The prevalence of overweight and obesity has been estimated at 71% and 14%, respectively, in US health professional men with gout,[5] whereas obesity has been reported to be as high as 28% in a UK general practitioner's population.[30] The association with diabetes was generally weak, with a prevalence of 6% among gout patients in the UK[31] and 5% among male health professionals with gout.[22] The prevalence of kidney stones was 15% among health professional men with gout[32] and renal insufficiency was 5% in the US general population[33] ( Table 2 ).
These cross-sectional associations have been consistently translated into increased future risk of relevant cardiovascular–metabolic sequelae. For example, the Framingham Study found that gout was associated with a 60% increased risk of CAD in men, which was not explained by clinically measured risk factors.[38] Also, in the Multiple Risk Factor Intervention Trial (MRFIT), participants with a history of gout had a 26% increased independent risk of myocardial infarction,[21] a 33% increased risk of peripheral arterial disease,[39] and a 35% increased risk of coronary heart disease (CHD) mortality.[40••] Similarly, the HPFS cohort showed 59% increased risk for nonfatal myocardial infarction and 55% for fatal myocardial infarction.[22] Furthermore, in the HPFS, men with gout had a 28% higher risk of death from all causes, a 38% higher risk of death from CVD, and a 55% higher risk of death from CHD.[22] Finally, an analysis based on the MRFIT data showed that men with gout had a 41% increased risk for incident type 2 diabetes.[23•]
These comorbidities of gout and independent associations with future risk of CVD and mortality add to the overall burden of gout, and provide strong support for serious consideration of these issues in determining appropriate lifestyle recommendations for gout patients.
Low-purine Diet vs. a Dietary Approach against the Metabolic Syndrome
The conventional dietary approach limits drinks or foods that are known to potentially precipitate an acute gouty attack, such as large servings of meat and heavy beers.[34] However, a rigid purine-restricted diet has been thought to be of dubious therapeutic value and can rarely be sustained for long.[34] Furthermore, low-purine foods are often rich in both refined carbohydrates (including fructose) and saturated fat.[34,41] These tend to further decrease insulin sensitivity, leading to higher plasma levels of insulin, glucose, triglycerides, and LDL-C, and decreased HDL-C levels, thereby furthering the risk of the metabolic syndrome and its complications in these patients.[34,41] In contrast, a diet aimed at lowering insulin resistance can not only improve uric acid levels[41] but also improve insulin sensitivity and decrease plasma glucose, insulin, and triglyceride levels, which could lead to a reduction in the incidence and mortality of CVD.[34] Of note, the HPFS data on incident gout are largely consistent with the lifestyle recommendations against insulin resistance. For example, weight loss,[5] higher dairy intake,[3] lower fructose intake,[16••] and higher coffee intake[18] that are known to reduce insulin resistance have all been found to be protective against the risk of developing new cases of gout (Table 3).[42–45]
Furthermore, the fact that previously tabooed items such as purine-rich vegetables, nuts, legumes, and vegetable protein, despite their high purine content, are not associated with an increase in gout risk[5] also supports their overall beneficial effects in gout patients likely through lowering insulin resistance. In fact, individuals who consumed a larger amount of vegetable protein (the highest quintile) had a 27% lower risk of gout compared with the lowest quintile.[3] These approaches can not only lead to lower uric acid levels and the risk of gout in the long run, but can also lower the major consequences of insulin resistance.[5,9,20–25] In other words, the overall risk–benefit ratio of the diet approach against the metabolic syndrome would likely yield a more favorable net outcome in the long run than the traditional low-purine diet. Furthermore, as compared with the less palatable low-purine diet,[46] a dietary approach against the metabolic syndrome may achieve higher long-term compliance. A formal comparison of these dietary approaches would be valuable.
A Prescription for Lifestyle Change in Patients with Hyperuricemia and Gout
The goals of lifestyle modifications are to help prevent both gout attacks and complications of gout and its comorbidities, including cardiovascular–metabolic sequelae and premature deaths. Thus, if certain factors can help prevent both recurrent gout attacks and other major health consequences, such measures would be highly preferred. In contrast, if certain factors can reduce the risk of recurrent gout, but can increase the risk of major health outcomes such as CVDs, type 2 diabetes, or cancer, it would be difficult to justify the long-term implementation of such measures among gout patients, particularly those with comorbidities. A simple pharmacologic analogy of this could be the use of low-dose aspirin. Even though low-dose aspirin can increase the risk of gout, it is difficult to justify stopping this medication given its cardioprotective benefits. The same holistic risk–benefit consideration would be needed in determining appropriate lifestyle recommendations for gout patients.
This leads us to consider the identified lifestyle risk factors of gout within a healthy lifestyle paradigm geared to prevent other common major disorders such as CVD, type 2 diabetes, and certain types of cancers.[47] Figure 1 summarizes this integration of the impacts of identified factors on the risk of gout into a recent dietary recommendation for the general public (i.e. Healthy Eating Pyramid).[47] As discussed above, most of the identified factors affect the risk of gout in the same direction as other major health outcomes, whereas the potential exceptions include seafood, sugary fruits, and alcoholic beverages. Below, each identified risk factor (Table 3) is discussed with a holistic risk–benefit recommendation considering both the risk of gout and other major health outcomes (Fig. 1).[47]
Figure 1.
Dietary impacts on the risk of gout and their implications within a healthy eating guideline pyramid
Exercise daily and reduce weight, as increased adiposity is associated with higher uric acid levels and an increased future risk of gout, whereas weight loss is associated with lower uric acid levels and a decreased risk of gout.[5,14,42] The healthy eating pyramid strongly recommends daily exercise and weight control by placing them at the foundation of the pyramid, as obesity is associated with many important health outcomes, including CHD,[48,49] hypertension,[50] type 2 diabetes,[51,52] kidney stones,[53] and gallstones.[54] Many patients with gout are overweight or obese, and weight reduction through gradual caloric restriction and exercise can substantially help lower uric acid levels and the risk of gout attacks,[41] in addition to its beneficial effects on associated cardiovascular–metabolic comorbidities and sequelae.
Limit red meat intake, as it is associated with higher uric acid levels and an increased future risk of gout.[3,8] The mechanism behind this increased risk may be multifactorial. The urate-raising effect of artificial short-term loading of purified purine has been well demonstrated by metabolic experiments in animals and humans.[55–58] Further, red meat is the main source of saturated fats, which are positively associated with insulin resistance,[59,60] and insulin resistance in turn reduces renal excretion of urate.[26,27,41,61] These fats also increase LDL cholesterol levels more than HDL cholesterol, creating a negative net effect. Higher intakes of these fats or red meat have been linked to major disorders such as coronary artery disease, type 2 diabetes, and certain types of cancer.
Tailor seafood intake to the individual, taking into account cardiovascular comorbidities, and consider omega-3 fatty acid supplements. Seafood intake has been linked to higher serum uric acid levels and an increased future risk of gout, which is likely due to its high purine content.[3,8] Increased intake of oily fish, other fish, and shellfish was associated with an increased risk of gout. However, given the apparent cardiovascular benefits of fish products,[62] particularly oily fish rich in omega-3 fatty acids, it would be difficult to justify a recommendation to avoid all fish intake considering only the risk of gouty flares. Oily fish[62] may be allowed while implementing other lifestyle measures, particularly among gouty patients with cardiovascular comorbidities. Furthermore, among patients with gout or hyperuricemia, the use of plant-derived omega-3 fatty acids or supplements of eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA) could be considered in place of fish consumption. Further, diets enriched in both linolenic acid and EPA significantly suppress urate crystal-induced inflammation in a rat model,[34,63] raising an intriguing potential protective role for these fatty acids against gout flares.
Drink skim milk or consume other low-fat dairy products up to two servings daily. Low-fat dairy consumption has been inversely associated with serum uric acid levels and also with a decreased future risk of gout.[3,8] Furthermore, low-fat dairy foods have been linked to a lower incidence of CHD,[64] premenopausal breast cancer,[65] colon cancer,[66] and type 2 diabetes.[67] Finally, low-fat dairy foods have been one of the main components of the dietary approaches to stop hypertension (DASH) diet that has been shown to substantially lower blood pressure.[68]
Consume vegetable protein, nuts, legumes, and purine-rich vegetables, as they do not increase the risk of gout,[3] and these food items (especially nuts and legumes) are excellent sources of protein, fiber, vitamins, and minerals. In fact, individuals in the highest quintile of vegetable protein intake actually had a 27% lower risk of gout compared with those in the lowest quintile.[3] Furthermore, nut consumption is associated with several important health benefits, including a lower incidence of CHD,[69,70] sudden cardiac death,[71] gallstones,[72,73] and type 2 diabetes.[74] Legumes, and dietary patterns with increased legume consumption, have been linked to a lower incidence of coronary heart disease,[75–77] stroke,[78] certain types of cancer,[79,80] and type 2 diabetes.[81] The recent healthy eating pyramid recommends consuming nuts and legumes one to three times daily (Fig. 1),[47] a recommendation that appears readily applicable to patients with gout or hyperuricemia.
Reduce intake of alcoholic beverages, particularly if drinking exceeds a moderate level (i.e. one to two drinks per day for men and no more than one drink per day for women), as these beverages, particularly beer and liquor, have been associated with higher uric acid levels and an increased risk of gout.[4,6] The overall health benefits of sensible moderate drinking (1–2 drinks/day for men and ≤1 drink/day for women) likely outweigh the risks, as more than 60 prospective studies have consistently indicated that moderate alcohol consumption is associated with a 25–40% reduced risk for CHD.[82,83] Prospective studies also suggest a similar protective effect against other CVD and death.[82] These benefits may be particularly relevant to middle-aged men,[82,83] the demographic in whom gout occurs most often. However, starting to drink is not generally recommended, since similar benefits can be achieved with exercise or healthier eating.[84] These other health effects of moderate drinking may be considered when advising patients with existing gout, or at high risk of developing gout, about alcohol intake.
Limit sugar-sweetened soft drinks and other sugary beverages, as the fructose contained in these beverages increases serum uric acid levels and the risk of gout.[13,15•,16••,17•] Furthermore, fructose intake has been linked to increased insulin resistance,[85] a positive energy balance,[86,87] weight gain, obesity,[88–90] type 2 diabetes,[91,92] an increased risk of certain cancers,[93–95] and symptomatic gallstone disease.[96] Thus, unlike with moderate consumption of alcoholic beverages, multiple health benefits can be expected from reducing or eliminating sugary soft drinks in the diet of gouty patients. Sweet fruits (e.g. apples and oranges) have also been linked to hyperuricemia and the risk of gout.[13,15•,16••,17•] However, given the other health benefits of these food items,[97,98] it appears difficult to justify restricting them even among gout patients.
Allow coffee drinking, if coffee is already being consumed, as both regular and decaffeinated coffee drinking have been associated with lower uric acid levels and a decreased risk of gout.[11,18] In addition, coffee drinking has been linked to a lower risk of type 2 diabetes,[99–101] kidney stones,[102,103] symptomatic gallstone disease,[104,105] and Parkinson's disease.[106] However, caffeine tends to promote calcium excretion in urine, and drinking a lot of coffee, about four or more cups per day, may increase the risk of fractures among women.[107] Caffeine, being a xanthine (i.e. 1,3,7-trimethylxanthine), likely exerts a protective effect against gout similar to that of allopurinol, through xanthine oxidase inhibition.[7,108] This means that intermittent use of coffee or acute introduction of a large amount of coffee may trigger gout attacks, just as the introduction of allopurinol does. Thus, if a patient with gout chooses to try coffee intake to help reduce uric acid levels and the risk of gout, its initiation may need to be handled like that of allopurinol.
Consider vitamin C supplements, as vitamin C has been found to reduce serum uric acid levels in clinical trials[7,43–45] and has recently been linked to a reduced future risk of gout.[19••] Whereas these data suggest that a total vitamin C intake of 500 mg/day or more is associated with a reduced risk, the potential benefit of lower intakes remains unclear. Furthermore, the potential cardiovascular benefit of vitamin C[109] may also be relevant to gout patients, because of their increased risk of cardiovascular morbidity and mortality.[22,40••] Given the general safety profile of vitamin C intake, particularly within the generally consumed ranges (e.g. the tolerable upper intake level of vitamin C is <2000 mg in adults according to the Food and Nutrition Board, Institute of Medicine),[110] vitamin C may provide a useful option in the prevention of gout.
Conclusion
Lifestyle and dietary recommendations for gout patients should consider other health benefits and risks, since gout is often associated with major chronic disorders such as the metabolic syndrome and an increased risk for CVD and mortality. Weight reduction with daily exercise and limiting intake of red meat and sugary beverages would help reduce uric acid levels, the risk of gout, insulin resistance, and comorbidities. Heavy drinking should be avoided, whereas moderate drinking, sweet fruits, and seafood intake, particularly oily fish, should be tailored to the individual in light of their anticipated benefits against CVD. Alternatively, the use of plant-derived omega-3 fatty acids or supplements of EPA and DHA could be considered instead of fish consumption. Vegetable and dairy protein, nuts, legumes, fruits (less sugary ones), and whole grains are healthy choices against the various comorbidities of gout; they do not increase the risk of gout and may even lower it by reducing insulin resistance. Coffee can be allowed, if it is already being consumed, and vitamin C supplementation can be considered, as both can lower serum urate levels, the risk of gout, and some of its comorbidities.
From Faculty of 1000
Soft Drinks, Fructose Consumption, and the Risk of Gout in Men: Prospective Cohort Study: Ranked "Changes Clinical Practice" by F1000
Hyon K. Choi; Gary Curhan
Posted: 04/22/2009
Abstract and Introduction
Abstract
Objective: To examine the relation between intake of sugar sweetened soft drinks and fructose and the risk of incident gout in men.
Design: Prospective cohort over 12 years.
Setting: Health professionals follow-up study.
Participants: 46 393 men with no history of gout at baseline who provided information on intake of soft drinks and fructose through validated food frequency questionnaires.
Main Outcome Measure: Incident cases of gout meeting the American College of Rheumatology survey criteria for gout.
Results: During the 12 years of follow-up 755 confirmed incident cases of gout were reported. Increasing intake of sugar sweetened soft drinks was associated with an increasing risk of gout. Compared with consumption of less than one serving of sugar sweetened soft drinks a month, the multivariate relative risk of gout for 5-6 servings a week was 1.29 (95% confidence interval 1.00 to 1.68), for one serving a day was 1.45 (1.02 to 2.08), and for two or more servings a day was 1.85 (1.08 to 3.16; P for trend=0.002). Diet soft drinks were not associated with risk of gout (P for trend=0.99). The multivariate relative risks of gout according to increasing fifths of fructose intake were 1.00, 1.29, 1.41, 1.84, and 2.02 (1.49 to 2.75; P for trend<0.001). Other major contributors to fructose intake, such as total fruit juice or fructose rich fruits (apples and oranges), were also associated with a higher risk of gout (P values for trend <0.05).
Conclusions: Prospective data suggest that consumption of sugar sweetened soft drinks and fructose is strongly associated with an increased risk of gout in men. Furthermore, fructose rich fruits and fruit juices may also increase the risk. Diet soft drinks were not associated with the risk of gout.
Introduction
Gout is the most common inflammatory arthritis in men.[1] The overall burden from this disease remains substantial and is growing.[3] Identifying the risk factors that are modifiable with available measures is an important first step in the prevention and management of this painful condition.[4] The doubling of the prevalence[5] and incidence[6] of gout over the past few decades in the United States[3,4] coincided with a substantial increase in the consumption of soft drinks and fructose.[7] For example, soft drink consumption in the US increased by 61% in adults from 1977 to 1997,[7] and sugar sweetened soft drinks represent the largest single food source of calories in the US diet.[7,8] Fructose consumption has also increased dramatically since the introduction of commercially produced high fructose corn syrup in 1967,[9] and its yearly per capita use has increased from 0 kg to 29 kg,[10-12] whereas naturally occurring fructose consumption has remained relatively stable.[13]
Conventional dietary recommendations for gout have focused on restriction of purine and alcohol intake but with no restriction of sugar sweetened soft drinks.[14,15] Although such soft drinks contain low levels of purine they contain large amounts of fructose, which is the only carbohydrate known to increase uric acid levels.[12,16-19] In humans, acute oral or intravenous administration of fructose results in a rapid increase in serum levels of uric acid through accentuated degradation of purine nucleotides[16] and increased purine synthesis.[20,21] This urate raising effect was found to be exaggerated in people with hyperuricaemia[18] or a history of gout.[17] It is unknown, however, if this acute effect is sustained on a long term basis and eventually translates into an increased risk of gout. We prospectively evaluated the relation between intake of sugar sweetened soft drinks and fructose and the incidence of gout in a cohort of 46 393 men with no history of gout.
Methods
The health professionals follow-up study is an ongoing longitudinal study of 51 529 male dentists, optometrists, osteopaths, pharmacists, podiatrists, and veterinarians. The men are predominantly white (91%) and were aged 40 to 75 years in 1986. The participants returned a mailed questionnaire in 1986 on diet, medical history, and drugs. Of the 49 166 men who provided complete information on intake of sugar sweetened soft drinks, 2773 (5.6%) reported a history of gout on the baseline questionnaire. We excluded these prevalent cases at baseline from this analysis, leaving the 46 393 men studied here. The follow-up rate exceeded 90% for each two year period. Participants who failed to respond to a questionnaire during one follow-up cycle were not removed from the study; they were included in the next mailing of the questionnaire (they could skip one questionnaire and still answer the next).
Assessment of Dietary Intake
To assess dietary intake, including that of soft drinks, we used a validated food frequency questionnaire that inquired about the average use of more than 130 foods and beverages during the previous year.[2,22,23] The baseline dietary questionnaire was completed in 1986 and was updated every four years. On all questionnaires participants were asked how often on average during the previous year they had consumed sugar sweetened soft drinks ("Coke, Pepsi, or other cola with sugar," "caffeine-free Coke, Pepsi, or other cola with sugar," and "other carbonated beverages with sugar") and diet soft drinks ("low-calorie cola with caffeine," "low-calorie caffeine-free cola," and "other low-calorie beverages"). We also assessed different types of fruits and fruit juices. We summed the intake of single items to create totals for consumption of sugar sweetened soft drinks, diet soft drinks, and fruit juice.
Participants could choose from nine frequency responses (never, 1-3 a month, 1 a week, 2-4 a week, 5-6 a week, 1 a day, 2-3 a day, 4-5 a day, and ≥6 a day). We computed nutrient intakes by multiplying the frequency response by the nutrient content of the specified portion sizes.[23] Values for nutrients were derived from US Department of Agriculture sources[24] and supplemented with information from manufacturers.
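In code, this questionnaire arithmetic amounts to a frequency-weighted sum. A minimal Python sketch follows; the frequency labels are taken from the questionnaire, but the daily-serving conversions and function names are illustrative assumptions, not the study's own tables:

```python
# Map each questionnaire frequency category to approximate servings per day.
# The category labels follow the questionnaire; the daily equivalents are
# illustrative conversions, not values from the study.
SERVINGS_PER_DAY = {
    "never": 0.0, "1-3 a month": 2.0 / 30, "1 a week": 1.0 / 7,
    "2-4 a week": 3.0 / 7, "5-6 a week": 5.5 / 7, "1 a day": 1.0,
    "2-3 a day": 2.5, "4-5 a day": 4.5, ">=6 a day": 6.0,
}

def daily_nutrient_intake(responses, nutrient_per_serving):
    """Sum frequency x nutrient-content-per-portion over all reported items.

    responses: {food: frequency category}, e.g. {"sugared cola": "1 a day"}
    nutrient_per_serving: {food: grams of the nutrient in one serving}
    """
    return sum(
        SERVINGS_PER_DAY[freq] * nutrient_per_serving.get(food, 0.0)
        for food, freq in responses.items()
    )
```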
Fructose is a monosaccharide. Half of the disaccharide sucrose is fructose, which is split from sucrose in the small intestine. Therefore total fructose intake is equal to the intake of free fructose plus half the intake of sucrose. In this cohort at baseline, orange juice, sugar sweetened soft drinks, apples, raisins, and oranges contributed 54.2% of monosaccharide fructose (15.9%, 15.5%, 14.5%, 5.2%, and 3.2%, respectively). Food intake assessed by this dietary questionnaire has been validated previously against two one week diet records in this cohort.[22,25] Specifically, the correlation coefficients between questionnaires and diet records were 0.84 for sugar sweetened cola, 0.73 for diet cola, 0.55 for other sugar sweetened soft drinks, 0.74 for other diet soft drinks, 0.78 for orange juice, 0.70 for apples, 0.76 for oranges, 0.59 for raisins, and 0.89 for other fruit juices.[25]
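This fructose bookkeeping reduces to a one-line formula; a minimal sketch (the function and variable names are mine):

```python
def total_fructose_g(free_fructose_g, sucrose_g):
    """Total fructose = free fructose + half of sucrose, since sucrose is
    split into equal parts glucose and fructose in the small intestine."""
    return free_fructose_g + 0.5 * sucrose_g

# Example: 20 g free fructose plus 30 g sucrose counts as 35 g total fructose.
assert total_fructose_g(20.0, 30.0) == 35.0
```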
Assessment of Non-dietary Factors
At baseline and every two years the participants provided information on weight, regular use of drugs (including diuretics), and medical conditions (including hypertension and chronic renal failure).[26] We calculated body mass index by dividing weight in kilograms by the square of the height in metres.
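The body mass index computation is the standard formula; a one-function sketch:

```python
def body_mass_index(weight_kg, height_m):
    """BMI = weight in kilograms divided by the square of height in metres."""
    return weight_kg / height_m ** 2

# Example: 85 kg at 1.78 m gives a BMI of about 26.8, which would fall in the
# >=25 kg/m2 stratum used in the stratified analyses below.
print(round(body_mass_index(85, 1.78), 1))  # 26.8
```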
Ascertainment of Incident Cases of Gout
We ascertained incident cases of gout using the survey criteria of the American College of Rheumatology, as previously described.[2] Briefly, on each biennial questionnaire participants indicated whether they had received a diagnosis of gout from a doctor. We mailed a supplementary questionnaire to those participants with self reported incident gout diagnosed from 1986 onwards to confirm the report and to ascertain the American College of Rheumatology criteria for gout.[2,27] Our primary end point was incident cases of gout that met six or more of the 11 criteria for gout.[2,27] To confirm the validity of the criteria in our cohort we reviewed the medical records from a sample of 50 of the men who had reported having gout. The concordance rate between the criteria and the medical record review was 94% (47/50).[2] We further evaluated the robustness of our results by using other outcome definitions for gout, including self reported gout diagnosed by a doctor (most sensitive) and cases that reported a tophus or crystal proved gout (most specific).
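The primary end point is thus a simple counting rule over the 11 American College of Rheumatology survey criteria. A sketch of that rule (the per-criterion assessments themselves are outside this snippet):

```python
def meets_acr_survey_definition(criteria_met, threshold=6, n_criteria=11):
    """Return True when at least 6 of the 11 ACR survey criteria are met.

    criteria_met: iterable of 11 booleans, one per survey criterion.
    """
    flags = list(criteria_met)
    if len(flags) != n_criteria:
        raise ValueError("expected one flag per criterion")
    return sum(flags) >= threshold
```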
Statistical Analysis
We computed person time of follow-up for each participant from the return date of the 1986 questionnaire to the date of diagnosis of gout, death from any cause, or the end of the study period (1998), whichever came first. Men who had reported having gout on previous questionnaires were excluded from subsequent follow-up.
To represent long term dietary intake patterns of individual participants we used cumulative average intakes on the basis of the information from questionnaires completed in 1986, 1990, and 1994.[2,28-30] For example, the incidence of gout from 1986 to 1990 was related to the soft drink intake reported on the 1986 questionnaire, and incidence from 1990 to 1994 was related to the average intake reported on the 1986 and 1990 questionnaires.
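The cumulative averaging can be written directly. A small sketch (a hypothetical helper, not the study code):

```python
def cumulative_average_exposures(intakes_by_cycle):
    """Exposure for each follow-up interval = mean of all questionnaires so far.

    intakes_by_cycle: intakes from the 1986, 1990, and 1994 questionnaires,
    in order. Returns one exposure value per follow-up interval.
    """
    return [sum(intakes_by_cycle[:i]) / i
            for i in range(1, len(intakes_by_cycle) + 1)]

# 1986-90 uses the 1986 value; 1990-94 uses the mean of 1986 and 1990; etc.
print(cumulative_average_exposures([2.0, 4.0, 3.0]))  # [2.0, 3.0, 3.0]
```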
We used Cox proportional hazards modelling (PROC PHREG) to estimate the relative risk for incident gout in all multivariate analyses (SAS Institute). For these analyses we categorised soft drink consumption into six frequency groups: less than 1 serving a month, 1 serving a month to 1 a week, 2-4 servings a week, 5-6 servings a week, 1 serving a day, and 2 or more servings a day. We categorised free fructose and total fructose intake into fifths for percentage of energy (nutrient density[31]). Multivariate models for soft drink consumption were adjusted for age (continuous), total energy intake (continuous), alcohol intake (seven categories), body mass index (six categories), use of diuretics (thiazide or furosemide (frusemide)) (yes or no), history of hypertension (yes or no), history of chronic renal failure (yes or no), and average daily intake of meats, seafood, purine rich vegetables, dairy foods, and total vitamin C (fifths).[2,26,28] We evaluated the potential impact of coffee intake,[32] caffeine intake, and fructose intake by entering each term (five categories for coffee intake and fifths for the others) into the multivariate model for soft drink consumption. In multivariate nutrient density models for fructose intake,[31] we simultaneously included energy intake, the percentages of energy derived from protein and carbohydrate (or non-fructose carbohydrate), intake of vitamin C and alcohol, and other non-dietary variables. The coefficients from these models can be interpreted as the estimated effect of substituting a specific percentage of energy from fructose for the same percentage of energy from non-fructose carbohydrate (or fat).[31]
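The models were fitted in SAS (PROC PHREG). For readers who prefer Python, a minimal analogue using the lifelines package might look like the following; the toy data and column names are mine, and the real analysis adjusts for the full covariate list above:

```python
import pandas as pd
from lifelines import CoxPHFitter

# Toy person-time data: follow-up years, incident-gout indicator, and a few
# covariates standing in for the study's much longer adjustment list.
df = pd.DataFrame({
    "years_followed": [12.0, 4.5, 12.0, 8.0, 12.0, 2.3, 10.5, 6.1],
    "incident_gout":  [0,    1,   0,    1,   0,    1,   0,    1],
    "age":            [52,   61,  45,   58,  49,   66,  55,   50],
    "bmi":            [24.1, 31.2, 29.0, 26.5, 26.0, 33.4, 27.3, 30.8],
    "soda_per_day":   [0.1,  2.0, 1.2,  0.4, 0.3,  2.5, 0.5,  1.5],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="years_followed", event_col="incident_gout")
cph.print_summary()  # the exp(coef) column estimates the relative risks
```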
We assessed trends in gout risk across categories of soft drink or fructose intake in Cox proportional hazards models by using the median values of intake for each category to minimise the influence of outliers. To assess possible effect modification we did analyses stratified by body mass index (<25 kg/m2 v ≥25 kg/m2), alcohol use (yes or no), and dairy intake (≤1.6 servings/day (median value) v >1.6 servings/day). We tested the significance of the interaction with a likelihood ratio test by comparing a model with the main effects of each intake and the stratifying variable and the interaction terms with a reduced model with only the main effects. For all relative risks we calculated 95% confidence intervals. P values are two sided.
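The trend test amounts to replacing each consumption category with the median intake of its members and entering that single continuous score into the Cox model; its P value is the reported P for trend. Schematically (the category medians below are made up for illustration):

```python
# Median servings/week for each soft drink category (illustrative values).
CATEGORY_MEDIANS = {
    "<1/month": 0.1, "1/month-1/week": 0.6, "2-4/week": 3.0,
    "5-6/week": 5.5, "1/day": 7.0, ">=2/day": 15.0,
}

def trend_score(category):
    """Continuous trend covariate: the category's median intake, which
    limits the influence of outliers within the extreme categories."""
    return CATEGORY_MEDIANS[category]

print(trend_score("5-6/week"))  # 5.5
```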
Results
During 12 years of follow-up of 46 393 eligible men from the health professionals follow-up study, we documented 755 newly diagnosed cases of gout meeting the American College of Rheumatology criteria. Table 1 shows the characteristics of the cohort according to baseline levels of sugar sweetened soft drinks and free fructose consumption. With increasing consumption of sugar sweetened soft drinks the intake of caffeine, fructose, meats, and high fat dairy foods tended to increase whereas mean age and low fat dairy intake tended to decrease (Table 1). With increasing consumption of free fructose the body mass index and intake of alcohol, caffeine, meats, and high fat dairy foods tended to decrease (Table 1).
Sugar Sweetened Soft Drinks and Incident Gout
Increasing intake of sugar sweetened soft drinks was associated with an increasing risk of gout (Table 2). Compared with the reference consumption level of less than one serving a month, the multivariate relative risk of gout for 5-6 servings a week was 1.29 (95% confidence interval 1.00 to 1.68), for one serving a day was 1.45 (1.02 to 2.08), and for two or more servings a day was 1.85 (1.08 to 3.16; P for trend 0.002). In contrast, diet soft drinks were not associated with risk of gout (P for trend 0.99). When additional adjustments were made for caffeine or coffee intake, these results did not change materially. After adjusting for fructose intake in fifths, however, the association between the intake of sugar sweetened soft drinks and risk of gout was attenuated and no longer significant (P for trend 0.10).
Fructose Intake and Incident Gout
Increasing fructose intake was associated with increasing risk of gout (Table 3). Compared with men in the lowest fifth of free fructose intake, the multivariate relative risk of gout in the highest fifth when substituting fructose for the equivalent energy from fat was 1.81 (95% confidence interval 1.38 to 2.38; P for trend <0.001). The corresponding relative risk increased after adjustment for total carbohydrate intake to reflect the substitution effect of fructose for other types of carbohydrates (multivariate relative risk 2.02, 1.49 to 2.75; P for trend <0.001). Similarly, higher total fructose intake was significantly associated with increasing risk of gout (P for trend ≤0.001; Table 3). When fructose intake was used as a continuous variable, the multivariate relative risk for a 5% increment in energy from free fructose, as compared with equivalent energy intake from other types of carbohydrates, was 2.10 (1.53 to 2.77) and the corresponding relative risk for total fructose was 1.52 (1.23 to 1.88).
Among other foods and beverages contributing fructose, total fruit juice intake was associated with risk of gout (Table 4). Compared with men who consumed less than a glass of fruit juice a month, the multivariate relative risk for gout in those consuming two or more glasses a day was 1.81 (95% confidence interval 1.12 to 2.93; Table 4). The corresponding multivariate relative risk for orange juice or apple juice was 1.82 (1.11 to 3.00). Similarly, intake of oranges or apples was associated with risk of gout. Compared with men who consumed less than one apple or orange a month, the multivariate relative risk of gout in those who consumed one apple or orange a day was 1.64 (1.05 to 2.56). The corresponding multivariate relative risk for orange intake alone was 1.55 (1.02 to 2.36) and for apple intake alone was 1.48 (0.98 to 2.25). No other individual fructose rich food items were associated with risk of gout, although their frequency of consumption was relatively low.
Risk According to Body Mass Index, Alcohol Use, and Dairy Intake
Stratified analyses were done to evaluate whether the association between consumption of sugar sweetened soft drinks and fructose and risk of gout varied according to body mass index, alcohol use, and dairy intake. Relative risks from these stratified analyses consistently suggested associations similar to those from the main analyses, and no significant interaction was found with these variables (all P values for interaction >0.63; figure).
Other Definitions of Dietary Exposure and Gout
When analyses were repeated using baseline dietary intake (1986 questionnaire) and updated dietary intakes every four years without cumulative averaging, the results remained significant. The multivariate relative risk between the extreme fifths of free fructose substituting for other carbohydrates with baseline dietary intake was 1.81 (95% confidence interval 1.36 to 2.41; P for trend <0.001) and with updated information without cumulative averaging was 1.93 (1.44 to 2.60; P for trend <0.001). The corresponding multivariate relative risks for substituting free fructose for fat were 1.59 (1.23 to 2.06; P for trend <0.001) and 1.77 (1.35 to 2.31; P for trend <0.001).
With other case definitions of gout, the magnitudes of associations tended to increase as specificity of the case definition increased, but null associations remained null. For example, as the definition became more specific, going from self reported gout (n=1676), to gout defined by the American College of Rheumatology criteria (n=755), to tophaceous or crystal proved gout (n=124), the multivariate relative risks between the extreme fifths of free fructose intake substituting for other carbohydrates were, respectively, 1.65 (1.35 to 2.01), 2.02 (1.49 to 2.75), and 2.25 (1.03 to 4.93). The corresponding multivariate relative risks for substituting free fructose for fat were 1.54 (1.29 to 1.85), 1.81 (1.38 to 2.38), and 2.23 (1.10 to 4.54).
Discussion
In this large prospective study of men we found that the risk of incident gout increased with increasing intake of sugar sweetened soft drinks. The risk was significantly increased with an intake level of 5-6 servings a week and the risk rose with increasing intake. The risk of incident gout was 85% higher among men who consumed two or more servings of sugar sweetened soft drinks daily compared with those who consumed less than one serving monthly. In contrast, diet soft drinks were not associated with the risk of incident gout. Furthermore, the risk of gout was significantly increased with increasing fructose intake; the risk of gout was about twice as high among men in the highest fifth of free fructose consumption than among men in the lowest fifth. These associations were independent of dietary and other risk factors for gout such as body mass index, age, hypertension, diuretic use, alcohol intake, and history of chronic renal failure. The current study provides prospective evidence that fructose and fructose rich foods are important risk factors to be considered in the primary prevention of gout in men.
We found that the risk of incident gout associated with fructose or fructose rich foods was substantial. For example, the risk of gout posed by the highest fifth of fructose intake was comparable to that seen with alcohol intake of 30 g to 50 g daily reported in this cohort (relative risk 1.96).[28] Similarly, the magnitudes of risk posed by sugar sweetened soft drinks or fruit juices were slightly larger than that of spirits (relative risk for ≥2 servings a day, 1.60) in the same cohort.[28] Furthermore, the increased risk of gout per serving was comparable to that of individual alcoholic beverages (35% for sugar sweetened soft drinks and 49% and 15% for beer and spirits[28]). Because the urate raising effect of fructose is greatest in patients with gout or hyperuricaemia,[16-19] our findings may be even more relevant in those patients.
Interestingly, fructose shares ethanol's urate raising mechanism, inducing uric acid production by increasing ATP degradation to AMP, a precursor of uric acid.[4,16,21,33,34] Fructose phosphorylation in the liver uses ATP, and the accompanying phosphate depletion limits regeneration of ATP from ADP, which in turn serves as substrate for the catabolic pathway to uric acid formation.[35] Thus minutes after an infusion of fructose, plasma (and later urinary) uric acid concentrations are increased.[16] In conjunction with purine nucleotide depletion, rates of purine synthesis de novo are accelerated, thus potentiating uric acid production.[20] In contrast, glucose and other simple sugars do not have the same effect.[12]
Furthermore, fructose could indirectly increase the level of serum uric acid and the risk of gout by increasing insulin resistance and circulating insulin levels.[13] Experimental studies in animal models and short term feeding trials among humans suggest that higher fructose intake contributes to insulin resistance, impaired glucose tolerance, and hyperinsulinaemia.[36-39] For example, rats fed a diet containing 35% of energy as fructose for four weeks developed reduced insulin sensitivity and whole body glucose disposal, whereas comparable amounts of starch had no observable effects.[37] In humans, reductions in insulin binding and insulin activity were observed among healthy people fed 1000 extra kilocalories as fructose for seven days, whereas intake of 1000 extra kilocalories as glucose had no similar adverse effects.[40] Likewise, in another study in humans,[41] intake of 15% of total energy as fructose for five weeks resulted in higher insulin and glucose responses than isocaloric diets with 7.5% of energy from fructose or no fructose. Additionally, an increase in fructose consumption often leads to positive energy balance, which may contribute to excess adiposity.[42,43] Excess adiposity is associated with a higher concentration of non-esterified fatty acids,[44] which might reduce insulin sensitivity by increasing the intramyocellular lipid content in muscle cells where insulin receptors are located.[13]
Public Health Implications
Our results have important practical implications. Over 100 years ago Osler prescribed diets low in fructose as a means to prevent gout.[12] He wrote in his 1893 text[45] that "The sugar should be reduced to a minimum. The sweeter fruits should not be taken."[12] Conventional dietary recommendations for gout have, however, focused on restriction of purine intake, although low purine diets are often high in carbohydrates, including fructose rich foods.[14] Our data provide prospective evidence that fructose poses a substantial risk for gout, thus strongly supporting the validity and importance of Osler's approach. These data even suggest that the risk posed by free fructose intake could be at least as large as that posed by purine rich foods such as total meat consumption (relative risk between extreme fifths of intake 1.41[26]). Thus the conventional low purine diet approach allowing fructose intake could potentially worsen the overall net risk of gout attacks. Furthermore, because fructose intake is associated with increased serum insulin levels, insulin resistance, and increased adiposity,[9,36-39,46] the overall negative health impact from fructose is expected to be larger particularly in patients with gout, who often have the metabolic syndrome (63%[47]) and are overweight (71%[26]). Conversely, the conventional low purine diet allowing fructose intake could have contributed to the high prevalence of metabolic syndrome observed in cross sectional studies.[47-49] None the less, these findings support the importance of recommending a reduction in fructose intake in patients with hyperuricaemia and gout in order to reduce the risk of gout as well as to improve overall long term outcomes.

Correspondingly, prospective cohort data indicate that higher consumption of sugar sweetened drinks is associated with excess adiposity and risk of type 2 diabetes.[50,51] In contrast, higher consumption of fruits (and vegetables) is associated with a lower risk of chronic disorders, including coronary heart disease,[52,53] stroke,[54] certain types of cancer,[55] cataract,[56] and age related macular degeneration.[57,58] Furthermore, an increased intake of fruit and vegetables is one of the main components of the dietary approaches to stop hypertension (DASH) diet, which has been shown to substantially lower blood pressure.[59,60] Thus the latest dietary guidelines call for five to 13 servings of fruits and vegetables a day, depending on an individual's caloric intake.[61] These various benefits and risks associated with individual fructose rich food items should be carefully considered in the potential public health applications of our findings.
Strengths and Limitations
Our study has several strengths and potential limitations. Our study was substantially larger than previous studies on gout,[1-21,62-66] and we prospectively collected and validated the dietary data. We avoided potentially biased recall of diet because the data on intake were collected before the diagnosis of gout. Because dietary consumption was self reported by questionnaire, some misclassification of exposure is inevitable. The food frequency questionnaire has been extensively validated in a subsample of this cohort, however, and any remaining misclassification would likely have biased the results towards the null. The use of repeated dietary assessments in the analyses not only accounts for changes in dietary consumption over time but also decreases measurement error.[22,25] As in other epidemiological studies of gout,[1,62-65] our primary definition of gout did not require observation of urate crystals in joint fluid. Although the presence of a tophus or urate crystals in joint fluid would be diagnostic of gout,[27] the sensitivity of these findings is too low, especially in a study population such as ours, because arthrocentesis is done infrequently. Thus its application would probably miss most of the genuine cases of gout. In our study fulfilment of six of the 11 criteria for gout from the American College of Rheumatology survey[27] showed a high degree of concordance with the review of medical records,[2] and the incidence rate of gout fulfilling the criteria in our cohort closely agreed with that estimated among male doctors in the Johns Hopkins precursor study (1.5 v 1.7 per 1000 person years).[1] Furthermore, when we evaluated the impact of various definitions for gout, our findings were robust and the magnitudes of associations tended to increase with increasing specificity of the case definition.
The restriction to health professionals in our cohort is both a strength and a limitation. The cohort of well educated men minimises the potential for confounding associated with socioeconomic status, and we were able to obtain high quality data with minimal loss to follow-up. Although the absolute rates of gout and distribution of dietary intake may not be representative of a random sample of US men, the biological effects of dietary intake on gout should be similar. Our findings are most directly generalisable to men aged 40 and older (the population with the highest prevalence of gout[62]) with no history of gout. Given the potential influence of female hormones on the risk of gout in women[67] and an increased role of dietary impact on uric acid levels among patients with existing gout,[68] prospective studies of these populations would be valuable.
In conclusion, our findings provide prospective evidence that consumption of sugar sweetened soft drinks and fructose is strongly associated with an increased risk of gout. Furthermore, fructose rich fruits and fruit juices may also increase the risk. In contrast, diet soft drinks were not associated with the risk of gout.
Ethical Approval: This study was approved by the Partners Health Care System institutional review board; return of a completed questionnaire was accepted by the board as implied informed consent.
Sidebar: What is Already Known on this Topic
Sugar sweetened soft drinks contain large amounts of fructose, which is known to increase serum uric acid levels
No studies have investigated the link between these beverages and fructose intake and the risk of gout
Sidebar: What this Study Adds
Consumption of sugar sweetened soft drinks or fructose is associated with an increased risk of gout in men
Diet soft drinks are not associated with the risk of gout in men
From Current Opinion in Rheumatology
Gout
Eliseo Pascual; Teresa Pedraz
Posted: 05/19/2004; Curr Opin Rheumatol. 2004;16(3) © 2004 Lippincott Williams & Wilkins
Abstract and Epidemiology
Abstract
Purpose of the Review: We review the latest publications on the epidemiology of gout, as well as new insights into the regulation of the inflammation that results from the regular interaction between MSU crystals and cells in both asymptomatic and symptomatic gouty joints. Finally, we review several publications of clinical interest.
Recent Findings: The incidence of gout has been found to be increasing, and the disease starts at an earlier age; this likely relates to changes in dietary habits that lead to the development of the insulin resistance syndrome, to which hyperuricemia, and thus gout, relates. Dietary modifications to correct the insulin resistance syndrome and reduce uricemia by increasing renal clearance of urate have health consequences that go far beyond their beneficial effect on gout. Monosodium urate crystals and cells interact in the asymptomatic joints of gouty patients. The mechanisms that trigger a gouty attack against this background, and those responsible for the self-limitation of gouty attacks, are not understood. The degree of maturation of the monocyte-macrophages present in the fluid appears to modulate the consequences of the crystal-cell interaction and gives a hint of how that interaction can produce such divergent consequences as intense inflammation or the absence of symptoms. Interest in gout treatment continues, as shown by the number of papers on the subject reviewed. In most cases, gout is an easy disease to treat, but we do not have enough information about how to handle those few patients with difficult disease, and what we colloquially refer to as difficult gout has not yet been properly defined.
Summary: Gout incidence and severity appear to be increasing, likely in relation to dietary habits. A switch in the pattern of inflammatory mediator secretion as crystal-containing macrophages mature may be the key to the self-limitation of gouty attacks. Difficult gout must be better defined.
Epidemiology
Using the Rochester Epidemiology Project computerized medical record system, all potential cases of acute gout in the city of Rochester, MN, during the periods 1977 to 1978 and 1995 to 1996 were identified. There was a greater than twofold increase in the rate of primary gout in the later period compared with the earlier one (P = 0.002). Of interest, the incidence of secondary diuretic-related gout did not increase (P = 0.140).[1] In a large study from Taiwan,[2] the clinical features of 1079 Chinese patients with gout seen by rheumatologists between 1993 and 2000 were analyzed and compared with earlier series; in one-fourth of the patients from the more recent group, the disease had started before age 30, and in the whole group, the first attack occurred between the third and fifth decades (68.2%) rather than between the fourth and sixth decades, as reported in older series. In addition, the incidence of gout in females had increased (8.0%), and the incidence of tophi was high (16.8%). A third study, also from Taiwan, compared the features of gout in patients diagnosed between 1983 and 1991 with those diagnosed between 1992 and 1999; in the latter group, the patients were, with high statistical significance, 2.7 years younger at the onset of disease, and the percentages of female and familial gout were higher as well. The percentages of obesity, hypertriglyceridemia, and nephrolithiasis were higher, although those of hypertension and high cholesterol levels were lower.[3] These studies from different parts of the world suggest that the incidence and severity of gout may be increasing, and the already well-known association of hyperuricemia and gout with dietary habits and the resulting insulin resistance is a likely cause, as extensively reviewed this past year[4*]; in this setting, hyperuricemia results from poor renal clearance of uric acid, and low-calorie diets improve the renal clearance of uric acid and consequently reduce serum uric acid levels.[5,6] The importance of central obesity was outlined in another study, also from Taiwan, which found that the waist-to-height ratio, an indicator of central obesity, has a significant linear effect on gout occurrence, independent of body mass index.[7] Through their association with the insulin resistance syndrome, hyperuricemia and gout are associated with cardiovascular disease and a reduced life span; in addition, hyperuricemia has been recognized as an independent risk factor for cardiovascular disease,[8,9] although not all studies find this independent association.[10] Rheumatologists must be aware of the risks associated with gout and hyperuricemia and actively join physicians from other fields[11] in supporting preventive lifestyle measures that help patients with their gout and carry very important additional benefits.
Overactivity of phosphoribosylpyrophosphate synthetase was found in a young woman with renal stones and hyperuricemia[12]; this defect had not been reported in women previously.
Crystals, Cells, and Inflammation
The finding of monosodium urate (MSU) crystals in synovial fluid samples from inflamed joints of patients with gout by McCarty and Hollander[13] represented an essential clue for understanding these attacks. Injection of MSU crystals into healthy human and canine joints reproduced the natural attacks,[14] and pretreatment with colchicine, phenylbutazone, or corticoids prevented them.[15] These observations resulted in the proposal that seeding of MSU crystals into the joint cavity from surrounding joint tissue deposits was the trigger and cause of the attacks of arthritis.[16] Strengthening this view, and as extensively reviewed by Terkeltaub,[17] in vitro interaction of MSU crystals with blood monocytes and other cells induces the release of different proinflammatory mediators. This model of crystal seeding into the joint cavity as the trigger of a gouty attack presupposed that the joints are free of crystals in intercritical periods. Subsequent papers, however, showed that MSU crystals were present in asymptomatic knees[18-20] and metatarsophalangeal joints[20-23] and that the presence of crystals was very regular in previously inflamed joints of patients with gout untreated with urate-lowering drugs.[24,25] The synovial fluid of these MSU-containing asymptomatic joints was more cellular than that of normal joints and contained various percentages of polymorphonuclear leukocytes (absent in the absence of crystals).[24] Also in these crystal-containing asymptomatic joints, phagocytosis of crystals was very intense (a mean of approximately one in four cells contained crystals),[26] and both the cellularity and the percentage of polymorphonuclear leukocytes decreased after the administration of prophylactic doses of colchicine.[27] As a whole, these data indicate that (1) after being formed in the joint, in the absence of hypouricemic treatment, MSU crystals stay in it indefinitely and can be regularly found in synovial fluid samples and (2) the interaction between cells and crystals in these asymptomatic joints is intense at any time and appears to result in minimal inflammation. Thus, gout can be viewed as a chronic inflammatory disease whose base is the presence of cells and MSU crystals in the joint and their constant interaction; during long asymptomatic periods there is some minimal inflammation in these joints, which becomes intense and symptomatic during the attacks. We still do not know what factors determine the amount of inflammation in these joints, but it is interesting that gouty attacks are self-limited, for reasons that are not understood, and also that gouty attacks are triggered by events such as acute medical illnesses, surgical procedures, or, as recently reported, parenteral nutrition.[28]

Some host factors that may influence the regulation of the intensity of the inflammation resulting from the MSU crystal-cell interaction are now being revealed. In one study, MSU crystals were added to eight mouse monocyte/macrophage cell lines arranged in increasing order of differentiation. Phagocytosis of crystals did not occur in the three least differentiated lines; in the remaining five lines, a correlation between phagocytosis of MSU crystals and the increasing state of macrophage differentiation was noted. More interestingly, the three least differentiated cell lines, nonphagocytic for crystals, did not produce tumor necrosis factor α when exposed to the crystals. Good tumor necrosis factor α production by cells at an intermediate stage of differentiation was noted, whereas none was triggered by the crystals in the two more mature cell lines despite intense crystal phagocytosis; these two more differentiated cell lines produced tumor necrosis factor α normally in response to phagocytosis of zymosan, but uptake of MSU crystals by these mature macrophages suppressed their tumor necrosis factor α synthesis in response to zymosan.[29] A follow-up study of this work[30**] was published recently; human peripheral blood monocytes were differentiated in vitro from 1 to 7 days. In these cells, the addition of MSU crystals did stimulate tumor necrosis factor α, interleukin-1β, and interleukin-6 production by monocytes (at day 1), but this production decreased markedly in the more differentiated macrophages: production at day 3 was much lower, and it was nonexistent at day 5 and after. More interestingly, the effect of MSU crystals on cells switched through the differentiation process: MSU crystals did not interfere with the zymosan-induced tumor necrosis factor α secretion by monocytes but significantly inhibited the zymosan-induced tumor necrosis factor α secretion by macrophages. Finally, supernatants from MSU crystal-treated monocyte cultures induced E selectin expression in human umbilical vein endothelial cells, whereas the supernatants from similarly treated macrophage cultures did not; consequently, MSU-treated monocyte culture supernatants were able to recruit and capture neutrophils, and the corresponding macrophage supernatants were not. All these data appear to indicate that more mature crystal-containing macrophages may help to downregulate the joint inflammation in inflamed gouty joints and perhaps allow the persistent interaction of cells and crystals in the asymptomatic joints during intercritical periods. In this respect, interaction between murine bone marrow-derived macrophages and MSU crystals, among others, has been found to promote macrophage survival and DNA synthesis, the latter response being particularly striking in the presence of macrophage colony-stimulating factor[31*]; these MSU crystal-stimulated, more mature surviving macrophages may be more effective in downregulating the crystal-related inflammation. Also supporting the probable importance of mature macrophages in modulating gouty inflammation, mactinin, a substance that promotes monocyte/macrophage maturation, has been found in gouty synovial fluids.[32]
A recombinant retrovirus containing the murine interleukin-10 gene was constructed and used to introduce murine interleukin-10 into embryonic fibroblast cells; the culture supernatant of these cells was found to contain biologically active murine interleukin-10. Injection of the transfected cells into murine air pouches significantly inhibited the cellular infiltration induced by MSU crystals injected 48 hours later. This effect was also produced by injection into the pouch of recombinant human interleukin-10 along with the MSU crystals.[33*] In a similar experiment by the same group,[34] retrovirally transfected prostaglandin D synthase reduced MSU crystal-related cellular infiltration in the murine air pouch, at least partly by inhibiting MIP-2 and interleukin-1β; these studies suggest that interleukin-10 and prostaglandin D synthase may have a role in the downregulation of MSU crystal-related inflammation in joints. Finally, also using the same murine model, the chemokines S100A8 and S100A9 were shown to be important in the MSU crystal-related recruitment of neutrophils.[35]
Treatment
An article published in the British Medical Journal warned against the use of high-dose colchicine as a treatment for acute gout, after discussing three patients in whom severe gastrointestinal side effects ensued. The alternative use of nonsteroidal antiinflammatory drugs, or of steroids if nonsteroidal antiinflammatory drugs pose risks, should be kept in mind.[36*] A previous controlled study had already shown how common these side effects are when colchicine is used to treat acute gout.[37] Severe side effects related to the use of colchicine in the treatment of gout continue to appear,[38-40] generally in patients at risk.
The local effect of three different commonly used steroid preparations (depot betamethasone phosphate, prednisolone tebutate, and triamcinolone hexacetonide) was examined by means of the mouse air pouch model. The steroid solutions were injected into the air pouches alone and also 24 hours after the injection of MSU crystals. Triamcinolone and prednisolone crystals persisted longer in the pouches than betamethasone. In the MSU crystal-injected pouches, crystal phagocytosis in the fluid was decreased in the betamethasone- (P < 0.01), prednisolone- (P < 0.003), and triamcinolone-injected (P < 0.006) pouches when compared with the pouches injected with MSU crystals alone. Pouches injected with MSU crystals alone showed the most intense tissue inflammation at all times. After MSU injection, betamethasone-injected pouches had a rapid but mild decrease in the number of lining cells and in inflammation. In contrast, triamcinolone- and prednisolone-injected pouches showed a very thin tissue with few or no vessels and almost no inflammation at 7 days. The pouches injected with MSU crystals and any of the corticoid preparations had three times more tophi-like structures and persistent crystals identified than those injected with MSU crystals alone.[41] The clinical significance of these findings for human gout is difficult to determine, but it may be worth recalling that small doses of intraarticular corticosteroids are sufficient to end gouty attacks.[42]
The hypouricemic effect of losartan and fenofibrate,[43,44] which results from increased renal clearance of urate,[45] adds to that of antihyperuricemic therapy and can be a useful adjuvant in the treatment of selected patients. Treatment of hyperuricemia in renal transplant recipients, where it results from the use of cyclosporine (which reduces renal clearance of uric acid), has been evaluated. Allopurinol was given to those not receiving azathioprine (100 patients), and benziodarone was given to the remainder of the group to avoid the azathioprine-allopurinol interaction (189 patients). Both drugs were effective for the control of hyperuricemia, but benziodarone caused greater reductions in serum uric acid levels, especially when used at mean doses of more than 75 mg/d. Severe side effects were uncommon in both the allopurinol and benziodarone groups.[46*] The best management of these patients, if they are taking azathioprine, is still an unanswered question. Unfortunately, both benzbromarone and benziodarone have now been withdrawn from the market because of reports of occasional adverse effects on the liver resulting in death; benzbromarone will continue to be available on a restricted basis in some European countries (Spain and France at least) for patients with severe gout refractory to the available urate-lowering therapies. In a case-control study of 100 patients with primary gout and 72 healthy controls, renal clearance of uric acid was lower than in controls even in patients showing an apparently high 24-hour uric acid output, indicating that relative, low-grade underexcretion of uric acid is a widespread feature in gout[47**]; this study underlines the need to continue the search for safe and effective uricosuric agents that act by increasing the renal clearance of uric acid.
The renal handling of urate was ascertained in a group of 24 healthy subjects with a high normal body lead burden, determined after an EDTA mobilization test, before and after lead chelation therapy; renal clearance of urate was found to increase after chelation.[48*] The extent to which it is worth determining the lead burden of patients with gout, and chelating those exposed in an attempt to increase urate clearance, is open to question.
The lower serum uric acid levels found in patients during attacks of gout, as recently reported again[49] and a cause of misdiagnosis, were found to be owing to higher renal clearance of uric acid.[50*]
The progression of the disease in self-medicating patients with gout, a cause of cutaneous tophi in Mexico,[51] was addressed again in a paper from Indonesia.[52] A large study of concomitant septic arthritis and gout was published from China; MSU crystals were detected in all synovial fluids. Subcutaneous tophus rupture with secondary wound infection was the most common route of infection.[53] All these papers remind us that gout is still a very serious health hazard in communities with lower health standards.
Finally, interest in gout continues, as shown by the reviews of the disease published over the past year in leading medical journals.[54,55] In general terms, gout is widely described as a disease that is easy to diagnose and has an effective treatment. It must be emphasized again that clinical diagnosis, although accurate in patients with clearly typical disease, becomes uncertain or is not considered in less typical cases; thus there is a need to routinely search for MSU crystals in all synovial fluid samples obtained from undiagnosed joints or in material aspirated from consistent lesions.[56] Although treatment with nonsteroidal antiinflammatory drugs tends to be effective in most cases, some patients respond poorly, and the need for an accurate diagnosis and for alternative treatments, such as local or intraarticular steroids, should be kept in mind. Allopurinol is sufficient hypouricemic treatment in most cases, but because of sensitivity to it, poor renal function, very high hyperuricemia (with an insufficient effect of allopurinol), or treatment with azathioprine (as in transplant recipients), some patients with severe gout may be very difficult to manage; the search for better solutions for the management of this group of difficult patients merits our interest.
From Topics in Advanced Practice Nursing eJournal > Articles
Identification and Management of Metabolic Syndrome: The Role of the APN
Douglas H. Sutton, EdD, MSN; Deborah A. Raines, PhD
Posted: 10/03/2007; Topics in Advanced Practice Nursing eJournal. 2007;7(2) © 2007 Medscape
Abstract
Metabolic syndrome now affects approximately 55 million people in the United States.[1] However, metabolic syndrome is not limited to the United States, and now has a global prevalence of approximately 35%.[2] The syndrome is an assemblage of interrelated abnormalities:
Central obesity;
Hypertension;
Dyslipidemia;
Insulin resistance; and
Elevated fibrinogen levels and a prothrombotic state.
All of these factors increase the patients' risk of developing heart disease and type 2 diabetes mellitus. Given the insidious onset of metabolic syndrome, early identification and intervention are critical for reducing the rising mortality rates associated with metabolic syndrome. The advanced practice nurse (APN) plays a critical role in:
Identifying risk factors;
Developing management strategies; and
Educating patients to avoid the onset or worsening of individual risk factors.
Together, these steps taken by the APN help reduce long-term morbidity and mortality associated with this global health calamity. Unfortunately, awareness of this crisis within the midlevel provider community is lacking. One study reported that less than 30% of the clinicians surveyed could name more than 3 risk factors contributing to the development of coronary artery disease.[3] The purpose of this article is to inform the APN of the importance and complexity of the syndrome as a burgeoning health problem facing industrialized societies.
Introduction to Metabolic Syndrome
Metabolic syndrome is a complex and evolving cluster of adverse risk factors that may end in the development of atherosclerotic cardiovascular disease (CVD) and type 2 diabetes mellitus (DM) and their associated morbidities.[2,4] Other names for metabolic syndrome include:
Insulin resistance syndrome;
Syndrome X; and
Dysmetabolic syndrome.
Reaven first described syndrome X as a cluster of diabetes, hypertension, and coronary artery disease with dyslipidemia in 1988.[5] It is believed that the major underlying metabolic abnormality is insulin resistance.[6] Because insulin resistance is closely associated with obesity, particularly with abdominal obesity, the recent escalation of obesity in the United States and other industrialized countries has been accompanied by a parallel increase in the prevalence of metabolic syndrome.[7-9]
Metabolic syndrome is a relatively common, yet potentially devastating, prognosticator for the development of atherosclerotic CVD. The Third Report of the National Cholesterol Education Program Expert Panel (NCEP) on Detection, Evaluation, and Treatment of High Blood Cholesterol in Adults (ATP III) found that ethnicity influences the prevalence of metabolic syndrome.[10] Mexican Americans now have the highest age-adjusted prevalence of metabolic syndrome for both men and women, and African-American women have a higher incidence of metabolic syndrome than African-American men.[10,11]
The etiology, pathophysiology, signs, symptoms, diagnostic tests, and treatment for metabolic syndrome are examined through a case study developed on the basis of several characteristics of an actual patient. Early recognition and intervention in patients with metabolic syndrome can have a positive impact on outcomes and decrease long-term morbidity.
Definition and Criteria
According to the American Heart Association (AHA) and the National Heart, Lung, and Blood Institute (NHLBI), the interrelated risk factors for metabolic syndrome include[12,13]:
Insulin resistance;
Atherogenic dyslipidemia;
Hypertension;
Obesity (particularly central or abdominal obesity); and
Defects in coagulation, inflammation, and fibrinolysis.
The International Diabetes Federation (IDF) has also defined metabolic syndrome, and although similarities can be found in regard to specific values for most risk factors, the IDF has replaced "central or abdominal obesity" with the more narrowly defined "increased waist circumference."[14] The joint AHA/NHLBI scientific statement, published September 12, 2005, identifies the dominant underlying risk factors as abdominal obesity and insulin resistance.[2]
The panel of experts who developed the statement reviewed and affirmed that individuals with abnormal levels for at least 3 of the 5 criteria should be considered to have metabolic syndrome. The current diagnostic criteria, which are based on the AHA/NHLBI scientific statement, are as follows (a minimal screening sketch follows the list)[2]:
Increased waist circumference (abdominal obesity): men ≥ 40 in (102 cm), women ≥ 35 in (88 cm);
Elevated triglycerides (dyslipidemia): 150 mg/dL or higher;
Reduced high-density lipoprotein cholesterol (HDL-C) (dyslipidemia): men < 40 mg/dL, women < 50 mg/dL;
Elevated blood pressure (BP): 130/85 mm Hg or higher; and
Elevated fasting glucose (insulin resistance): 100 mg/dL or higher.
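To make the 3-of-5 rule concrete, the screen can be sketched in a few lines of Python. This is a minimal illustration, not guideline software; the function and variable names are invented, and the thresholds come from the list above.

def has_metabolic_syndrome(waist_in, sex, tg, hdl, sbp, dbp, glucose):
    # One boolean per AHA/NHLBI criterion; lipids and glucose in mg/dL, BP in mm Hg.
    criteria = [
        waist_in >= (40 if sex == "male" else 35),  # abdominal obesity
        tg >= 150,                                  # elevated triglycerides
        hdl < (40 if sex == "male" else 50),        # reduced HDL-C
        sbp >= 130 or dbp >= 85,                    # elevated blood pressure
        glucose >= 100,                             # elevated fasting glucose
    ]
    return sum(criteria) >= 3                       # any 3 of the 5 suffice

# Hypothetical values for illustration (not taken from the case study tables):
print(has_metabolic_syndrome(44, "male", tg=180, hdl=38, sbp=145, dbp=86, glucose=108))  # True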
According to the AHA/NHLBI scientific statement, the most widely recognized of the metabolic risk factors include atherogenic dyslipidemia, elevated plasma glucose, and elevated BP.[2] The term "atherogenic dyslipidemia" refers to the grouping of lipoprotein abnormalities that include:
Elevated serum triglyceride levels;
Increased low-density lipoprotein cholesterol (LDL-C); and
Reduced HDL-C.
Pathophysiology
The clinical significance of elevated plasma glucose, particularly in the obese person, may be indicative of insulin resistance. In insulin resistance, tissue has a diminished ability to respond to the action of insulin. In a person with normal metabolism, insulin is released from the beta cells of the islets of Langerhans in the pancreas. The presence of insulin signals insulin-sensitive tissues, including muscle, adipose tissue, and liver cells, to absorb glucose and maintain the circulating blood glucose at a normal level. In an insulin-resistant person, the release of insulin does not trigger the expected insulin response of absorption by muscle, adipose tissue, and liver cells; therefore, the circulating blood glucose levels rise.
To compensate for increased serum glucose levels, the pancreas secretes more insulin. This compensatory mechanism, referred to as hyperinsulinemia, tries to maintain normal glucose levels. Eventually, the beta cells of the pancreas are unable to overcome insulin resistance through hypersecretion of insulin, which results in an elevated serum glucose level. Insulin resistance in fat cells results in hydrolysis of stored triglycerides, which elevates free fatty acids in the blood plasma. Insulin resistance in muscle reduces glucose uptake, whereas insulin resistance in the liver reduces glucose storage, with both effects serving to elevate circulating blood glucose.
Insulin Resistance: Association With Hypertension and Dyslipidemia
Approximately 50% of patients with hypertension have also been found to be insulin-resistant.[15] Exactly how insulin resistance influences BP remains unclear, however. In many previously normotensive individuals, elevated serum glucose levels seem to precede the development of essential hypertension.[16-18] In addition to developing essential hypertension and glucose intolerance, these insulin-resistant patients tend to also develop elevated plasma triglyceride levels and low HDL-C. All of these findings are consistent with the diagnosis of metabolic syndrome.
Inflammation
Cytokines and obesity. Metabolic syndrome has also been associated with a state of chronic, low-grade inflammation.[19,20] Inflammatory cytokines provoke insulin resistance in both adipose tissue and muscle.[20,21] Cytokines are nonantibody proteins secreted by inflammatory leukocytes and are considered key modulators of inflammation.[22] In the obese individual, adipose tissue produces excess cytokines, which are believed to exacerbate the syndrome.
Specifically, elevated insulin and glucose concentrations are associated with hypertrophied subcutaneous fat cells, or adipocytes. Adipocytes can secrete signaling messengers called adipokines, acting as endocrine cells that affect other tissues and physiologic functions. The adipokines released include[21]:
Cytokines;
Resistin;
Adiponectin;
Leptin;
Tumor necrosis factor (TNF); and
Plasminogen activator inhibitor (PAI)-1.
These peptide hormones have been implicated in insulin sensitivity and energy homeostasis, thereby exacerbating metabolic syndrome.
Prothrombotic factors: proinflammatory state. A growing body of research now implicates high circulating levels of prothrombotic factors and the presence of a proinflammatory state as being indicative of an even higher risk for acute cardiovascular syndromes.[23-27] The use of the high-sensitivity C-reactive protein (hsCRP) serum test, as a marker of low-grade vascular inflammation, is among the most promising recent risk assessment developments for both atherosclerotic CVD and metabolic syndrome.
Currently, the AHA recommends the use of hsCRP as an adjunct to traditional risk factor screening in individuals at intermediate risk, as identified by Framingham scoring, that is, those whose 10-year risk for coronary heart disease is in the range of 10% to 20%.[26] The AHA endorsed the test as the only inflammatory biomarker currently available with "adequate standardization" and "predictive value" to substantiate use in the outpatient clinical setting.[26]
On the basis of prior studies, levels of hsCRP < 1, 1-3, and > 3 mg/L have been defined as lower, moderate, and higher cardiovascular risk groupings, respectively.[27] Given the relatively low cost of this test, clinicians might consider this at the same time as lipid screening.
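Those cutpoints amount to a simple lookup; stated as a minimal Python sketch (a hypothetical helper, not a validated clinical tool):

def hscrp_risk_band(hscrp_mg_l):
    # Bands follow the lower/moderate/higher groupings cited above (mg/L).
    if hscrp_mg_l < 1:
        return "lower"
    if hscrp_mg_l <= 3:
        return "moderate"
    return "higher"

print(hscrp_risk_band(2.4))  # "moderate" cardiovascular risk grouping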
The AHA/NHLBI scientific statement lists several risk factors for the development of metabolic syndrome. Many of these, along with their clinical relevance, are outlined in Table 1.[2]
Diagnosis
From a clinical perspective, the diagnosis of metabolic syndrome identifies a patient at increased risk for atherosclerotic CVD and/or type 2 DM. In an effort to introduce the syndrome into clinical practice, the AHA/NHLBI has attempted to formulate simple diagnostic criteria while avoiding attributing the development of the syndrome to a single cause.
It should be noted that some individuals or ethnic groups (for example, Asians) develop characteristics of insulin resistance and metabolic syndrome with only moderate increases in waist circumference (that is, beginning at 37 in [94 cm] in men or 32 in [80 cm] in women).[2] Regardless of ethnicity or sex, individuals who exhibit 3 of the 5 AHA/NHLBI diagnostic criteria are considered to have metabolic syndrome.
Goals of Clinical Management
For individuals diagnosed as having metabolic syndrome, first-line therapy is directed toward identifying and managing the major risk factors[2]: atherogenic dyslipidemia, hypertension, and impaired glucose regulation.
Prevention of type 2 DM is another important goal for those individuals who do not yet have the disease because of the higher risk associated with type 2 DM and the development of atherosclerotic CVD. The emphasis for the clinician is to mitigate those risk factors that can be modified through therapeutic lifestyle changes (TLCs): obesity, physical inactivity, and atherogenic diet.
TLCs include:
Weight control;
Increased physical activity;
Alcohol moderation;
Sodium restriction; and
Emphasis on increased consumption of fresh fruits, vegetables, and low-fat dairy products.
TLCs positively affect each of the metabolic syndrome risk factors.[2]
Drug therapy remains a consideration for those individuals whose relative risk remains high in the presence of hypertension, dyslipidemia, or impaired glucose regulation. In addition, clinicians should remain attentive to promoting smoking cessation in any patient who smokes.
The recommendations for clinical management are based largely on existing NHLBI, AHA, and American Diabetes Association (ADA) guidelines for the management of specific risk factors. Lifestyle risk factor reduction focuses on long-term prevention of CVD and type 2 DM, whereas metabolic risk factor reduction focuses on shorter-term prevention of CVD and type 2 DM.
The therapeutic goals and recommendations related to lifestyle risk factor reduction, as presented in the AHA/NHLBI scientific statement, are as follows:
Balance activity and caloric intake to reduce baseline weight by 7% to 10% in the first year and, ultimately, to achieve an ideal body mass index (BMI) of less than 25 kg/m2;
Initiate regular, moderate-intensity physical activity of at least 30 minutes every day (desired), but at least 5 days per week; duration and intensity are based on the individual's relative risk;
Most dietary fat intake should be unsaturated; and
Dietary intake of simple sugars should be limited.
The therapeutic goals and recommendations related to metabolic risk factor reduction are presented in the AHA/NHLBI scientific statement and are summarized in Table 2; the secondary non-HDL-C arithmetic from the table is sketched just after it.
----------------------------------------------------------------------------
Table 2. Metabolic Risk Factor Goals and Recommendations
Therapeutic Target & Therapeutic Recommendations
----------------------------------------------------------------------------
Primary: LDL-C (mg/dL)
< 100: optimal
100-129: near optimal
130-159: borderline high
160-189: high
≥ 190: very high
The LDL-C goal is based on the relative risk of the individual client. The greater the risk, the lower the goal. The presence of clinical atherosclerotic CVD confers high risk and includes (1) clinical CHD, (2) symptomatic carotid artery disease, (3) peripheral arterial disease, (4) abdominal aortic aneurysm, and (5) type 2 DM.[32] Other major risk factors that modify LDL goals include (1) cigarette smoking, (2) hypertension, (3) low HDL-C, (4) family history of premature CHD, and (5) age.[28]
----------------------------------------------------------------------------
Secondary: triglycerides (TG) (mg/dL)
< 150: normal
150-199: borderline high
200-499: high
≥ 500: very high
The primary aim of therapy is to reach the LDL-C goal. TLCs focused on weight management and increased physical activity should be emphasized. If TGs are ≥ 200 mg/dL after the LDL-C goal is reached, set a secondary goal for non-HDL-C (total cholesterol minus HDL-C) 30 mg/dL higher than the LDL-C goal.[33]
----------------------------------------------------------------------------
Tertiary: HDL-C target
> 40 mg/dL in men
> 50 mg/dL in women
No specific goal has been identified, but instead the client should maximize TLCs to raise the HDL-C as much as possible. The focus of therapy remains on achieving LDL-C goal level for the relative risk assigned to the client.[33]
----------------------------------------------------------------------------
Elevated BP
Reduce BP to < 140/90 mm Hg (or < 130/80 mm Hg if diabetes present)
Reduce BP further to extent possible through TLCs
For BP ≥ 120/80 mm Hg: Initiate or maintain TLCs in all clients with metabolic syndrome.
For BP ≥ 140/90 mm Hg (or ≥ 130/80 mm Hg for individuals with chronic kidney disease or diabetes): Add BP medications as needed and tolerated to achieve goal BP.[2]
----------------------------------------------------------------------------
Elevated Glucose
For IFG, delay progression to type 2 DM
For diabetes, hemoglobin A1c (HbA1C) < 7.0%
For IFG: Encourage weight reduction and increased physical activity.[2]
For type 2 DM: Encourage TLCs and pharmacotherapy as necessary to achieve near-normal HbA1C (< 7%).[2]
----------------------------------------------------------------------------
Prothrombotic state
Reduce thrombotic and fibrinolytic risk factors
High-risk: Initiate and continue low-dose aspirin therapy; in patients with atherosclerotic CVD, consider clopidogrel if aspirin is contraindicated. For moderately high-risk patients, consider low-dose aspirin prophylaxis.[2]
----------------------------------------------------------------------------
Proinflammatory state
No specific therapies recommended beyond TLCs[2]
----------------------------------------------------------------------------
Notes:
LDL-C = low-density lipoprotein cholesterol; CVD = cardiovascular disease; CHD = coronary heart disease; DM = diabetes mellitus; HDL-C = high-density lipoprotein cholesterol; TLCs = therapeutic lifestyle changes; BP = blood pressure; IFG = impaired fasting glucose; HbA1C = glycosylated hemoglobin
----------------------------------------------------------------------------
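To make the secondary-goal arithmetic from Table 2 explicit, here is a minimal sketch (a hypothetical helper; thresholds from the table):

def non_hdl_goal(ldl_goal, tg):
    # Per Table 2: TGs of 200 mg/dL or higher, once the LDL-C goal is met,
    # set a secondary non-HDL-C goal 30 mg/dL above the LDL-C goal.
    # Non-HDL-C itself is total cholesterol minus HDL-C.
    return ldl_goal + 30 if tg >= 200 else None

print(non_hdl_goal(100, 250))  # 130, i.e., a 130 mg/dL non-HDL-C goal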
Case Study
Presentation and History
Robert, a 46-year-old man, presents for a routine physical examination for his new job. He denies any complaints at present, and states that he generally feels "pretty good." He denies any recent history of illness or injury.
Robert is a married, newly employed salesman with 2 grown children. He states that he has experienced good health except for a "few aches and pains every now and then." He does not have a primary healthcare provider, as he has just recently been able to obtain health insurance. His last physical examination was over 10 years ago for a job-related injury to his knee. Robert has no allergies, takes no prescription medications, and is able to perform all activities of daily living. He takes acetaminophen occasionally for his "aches and pains."
Family history is significant for his mother and brother having heart disease, hypertension, and obesity.
His mother has had 2 myocardial infarctions (MIs), and his older brother takes oral medication for type 2 DM.
Social history is significant for:
High-fat, high-cholesterol diet that he attributes to his frequent travels associated with his job;
Sedentary lifestyle that he also attributes to his traveling; and
Moderate alcohol use.
Robert denies the use of tobacco products or illegal drug use.
Physical Examination
Robert, who appears his stated age of 46, is a moderately obese, white man. Vital signs include:
Temperature, 98.8°F (37.1°C);
Heart rate, 88 beats per minute;
Respirations, 16 breaths per minute;
Average BP reading of 144/90 mm Hg in both arms (repeated measurements confirmed after 5 minutes of rest between readings);
Weight, 237 lb (107.7 kg);
Height, 68 in (173 cm); and
Calculated BMI, 36 kg/m2.
Robert carries a significant portion of his excess weight in his abdominal area (central obesity); his waist circumference is 44 in (112 cm). The rest of his physical examination is essentially unremarkable. The working diagnoses of obesity and hypertension are based on Robert's history and physical examination, and warrant the following diagnostic and teaching plans. Robert was rescheduled for a second office visit in 2 weeks to discuss the results of his diagnostic work-up.
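The charted BMI can be verified directly from the metric values above; a one-off check, not clinical software:

weight_kg = 107.7
height_m = 1.73
bmi = weight_kg / height_m ** 2  # BMI = weight (kg) / height (m) squared
print(round(bmi, 1))             # 36.0 kg/m2, matching the recorded value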
Diagnostic Plan
The following diagnostic tests are recommended on the basis of the "Seventh Report of the Joint National Committee on Prevention, Detection, Evaluation, and Treatment of High Blood Pressure (JNC 7)" guidelines[34]:
Electrocardiogram;
Urinalysis;
Fasting blood glucose;
Hematocrit;
Serum potassium;
Creatinine (or the corresponding estimated glomerular filtration rate);
Calcium; and
Lipid profile (after a 9- to 12-hour fast) that includes HDL-C, LDL-C, and triglycerides.
Because of Robert's significant family history and current working diagnoses, the hsCRP test would also be recommended for Robert at this time.
Teaching Plan
A follow-up visit is scheduled and teaching is initiated in regard to TLC, specifically diet and physical activity. According to the JNC 7 guidelines, the adoption of healthy lifestyles by all individuals is an indispensable part of the management of those with hypertension, as well as those with obesity.[34] The major lifestyle modifications for Robert include[35,36]:
Weight reduction;
Adoption of the Dietary Approaches to Stop Hypertension (DASH) eating plan, including fruits, vegetables, low-fat dairy, whole grains, poultry, fish, and nuts; and minimal amounts of fats, red meat, sweets, and sugar-containing beverages;
Reduction in dietary sodium intake;
Increased physical activity; and
Moderation of alcohol consumption.
Follow-up Visit
Because metabolic syndrome can represent numerous abnormalities, the differential diagnoses can be broad. For Robert, after a comprehensive history and physical examination, his working diagnoses include obesity, stage I hypertension, and metabolic syndrome.
On the basis of his age and family history, the clinician might also consider type 2 DM and atherosclerotic CVD. Because of the insidious nature of each of these disorders, Robert provides no chief complaint and believes himself to be healthy. He is unaware of the potential risk that he has for experiencing an MI and coronary death.
Upon arriving at the clinic for his 2-week follow-up visit, Robert reported making some lifestyle changes. His weight was 233 lb (105.9 kg), down 4 lb (approximately 2 kg) from his initial visit, and his adjusted BMI was 35.4 kg/m2. BP readings in both arms average 145/86 mm Hg. He attributes the change in his weight to his adherence to the prescribed dietary plan, as well as to his daily effort to walk a minimum of 20-30 minutes. Since his initial visit 2 weeks ago, Robert reports only 1 occasion of alcohol use, and reaffirms that he does not smoke. In addition, he is making efforts to include his family and colleagues in his new lifestyle changes.
A review of the various diagnostic assessment results is presented in Table 3.
On the basis of the history and physical examination, as well as diagnostic testing, Robert meets the AHA/NHLBI criteria for the diagnosis of metabolic syndrome. Of the 5 diagnostic criteria used to screen for metabolic syndrome, Robert has abnormal results in all 5 areas. In addition to the AHA/NHLBI criteria, Robert has a 16% coronary heart disease risk projection according to the Framingham criteria.[37]
For the clinician, the primary purpose of recognizing this clustering of diagnostic criteria into a formal diagnosis of metabolic syndrome is to establish treatment goals for a multitude of associated risk factors, namely, hypertension, hyperlipidemia, and hyperinsulinemia -- all of which result in increased cerebrovascular and cardiac morbidity and mortality in these patients.
Plan of Care
According to the AHA/NHLBI scientific statement, the primary goal of clinical management should focus on the reduction of risk factors that are known to lead to the development of atherosclerotic CVD and type 2 DM in patients who have yet to develop clinical diabetes but are considered to be at higher risk.[2] In light of the current recommendations from the AHA/NHLBI, and in consultation with Robert, the 2 primary interventions that have been agreed upon will focus on TLCs and pharmacologic treatment. In addition to the treatment plan, Robert must recognize the need for continued assessment and management of risk factors associated with his diagnosis of metabolic syndrome.
TLCs
The first and most preferred intervention, as it relates to improvement in outcomes and long-term reduction of risks associated with atherosclerotic CVD and type 2 DM, will be TLCs. As was discussed at his initial visit, Robert will focus on diet, exercise, and other social and environmental factors that he can change in order to reduce the inherent risks associated with his diagnosis. The initial plan developed for Robert, using the principles of the DASH eating plan, includes a weight reduction goal of 7% to 10% of baseline weight, or approximately 17-24 lb (8-11 kg), over the next 6-12 months.[35,36]
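The target range follows directly from Robert's baseline weight; the arithmetic is shown only to make the goal explicit:

baseline_lb = 237
low, high = 0.07 * baseline_lb, 0.10 * baseline_lb
print(round(low), round(high))  # 17 24 -- roughly 17 to 24 lb (about 8-11 kg)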
Robert also agreed to:
Initiate a walking program of approximately 30 minutes' duration each day;
Achieve a total of at least 180 minutes of exercise per week; and
Reduce or eliminate alcohol from his diet.
Exercise leads to a reduction in body fat, particularly abdominal fat. The HEalth, RIsk factors, exercise Training, And GEnetics (HERITAGE) study demonstrated the utility of physical activity in improving a cluster of cardiovascular and metabolic risk factors simultaneously.[38]
Pharmacologic Considerations
According to the NCEP guidelines, the primary goal of pharmacologic therapy for Robert is to reduce his LDL-C to below 100 mg/dL.[33] Statins are considered to be the most effective pharmacologic agents to reduce LDL-C; on average, statins lower LDL-C by 18% to 55%.[29] In addition, statins:
Reduce cellular inflammation;
Lower C-reactive protein levels; and
Improve the antioxidative properties of LDL-C.
Fibrates primarily lower triglycerides (by about 40%) and, to a lesser extent, reduce LDL-C (10% to 15%) and raise HDL-C (15% to 20%). Combination therapy with a fibrate and a statin is potentially useful for patients with atherogenic lipid profiles; for such combinations, fenofibrate appears to be the more appropriate choice because of its lower myopathic potential.
Alternatively, the clinician may consider adding a cholesterol absorption inhibitor, such as ezetimibe (Zetia). This reduces LDL-C levels by an additional 25% within approximately 2 weeks of combination therapy.[39] Due to the complexity of Robert's dyslipidemia (elevated LDL, low HDL, and high triglycerides), combination therapy may be required if Robert is to achieve the NCEP target goals.[40] It should also be noted that hyperglycemia and hypertriglyceridemia are related; improving hyperglycemia, therefore, can significantly lower triglyceride levels.[41]
A clear association has been established between diabetes and microvascular and macrovascular disease; thus, it would seem important to monitor for the development of type 2 DM in Robert and effectively treat his underlying insulin resistance. None of the marketed antidiabetic agents are currently approved for the prevention of type 2 DM; however, metformin has been studied and found to reduce insulin resistance in the Diabetes Prevention Program Trial.[42] From this same trial, it was found that patients randomized to diet and exercise reduced their risk for progression to type 2 DM by 58% compared with placebo, whereas those who received metformin reduced their risk by 31% compared with placebo.[34]
Robert has also been diagnosed with stage I hypertension (defined as systolic blood pressure [SBP], 140-159 mm Hg or diastolic blood pressure [DBP], 90-99 mm Hg); therefore, an antihypertensive medication should be considered as well. Because of the elevated serum glucose levels, which may be a precursor to the development of type 2 DM, the recommended drug choice for Robert is an angiotensin-converting enzyme (ACE) inhibitor (or angiotensin receptor blocker [ARB] in patients who cannot tolerate ACE inhibitors). The treatment goal for Robert is to achieve a target BP of < 130/80 mm Hg in order to prevent progressive nephropathy, MI, and stroke.
It should be noted that in the presence of diagnosed type 2 DM, many patients require combinations of 3-5 agents to achieve BP target goals.[40]
Initially, conservative pharmacologic therapy was implemented for Robert. He was placed on a low-dose ACE inhibitor to control his stage I hypertension, as well as atorvastatin (Lipitor) to reduce his LDL-C and triglyceride levels. It was discussed that his adherence to the TLC plan was critical in achieving an improvement in his HDL-C level, as well as reducing his hyperglycemia. Robert was scheduled for a return visit in 4 weeks to evaluate the effectiveness of his treatment plan. The need for frequent reassessment and follow-up care was discussed with Robert, who agreed that this approach was necessary to achieve an optimal clinical outcome.
Summary
APNs need to include screening for metabolic syndrome and a discussion of health-promoting behaviors as part of the routine assessment of all patients. Early intervention is the key to limiting long-term morbidity due to metabolic syndrome. Robert represents a typical case: a patient with delayed entry into the healthcare system and predisposing genetic and environmental risk factors, but without symptoms that interfere with his activities of daily living or his self-perceived overall health. Empowering him as an individual, and providing knowledge that may motivate him to make lifestyle changes, is a critical intervention.
Although TLCs, coupled with a pharmacologic plan, may prove sufficient to control and ultimately reverse Robert's diagnosis of metabolic syndrome, compliance with major lifestyle change has been reported to be low, and most patients fail to achieve target goals.[26,43] This finding heightens the need for early intervention by the APN as well as frequent reassessment and modification of the treatment plan. Managing the complexities of a patient with metabolic syndrome is a multidimensional challenge; APNs must recognize the importance of their role in slowing or stopping the progression of future debilitating disease.
Top
From U.S. Pharmacist
Vitamin D Supplementation: An Update
Christine Gonzalez, PharmD, CHHC
Authors and Disclosures
Posted: 11/11/2010; US Pharmacist © 2010 Jobson Publishing
Abstract and Introduction
Introduction
An estimated 1 billion people worldwide, across all ethnicities and age groups, have a vitamin D deficiency.[1–3] This is mostly attributable to people getting less sun exposure because of climate, lifestyle, and concerns about skin cancer. The 1997 Dietary Reference Intake (DRI) values for vitamin D, initially established to prevent rickets and osteomalacia, are considered too low by many experts.[4] DRI values are 200 IU for infants, children, adults up to age 50 years, and pregnant and lactating women; 400 IU for adults aged 50 to 70 years; and 600 IU for adults older than 70 years. Current studies suggest that we may need more vitamin D than presently recommended to prevent chronic disease. Emerging research supports the possible role of vitamin D in protecting against cancer, heart disease, fractures and falls, autoimmune diseases, influenza, type 2 diabetes, and depression. Many health care providers have increased their recommendations for vitamin D supplementation to at least 1,000 IU.[5] As a result, more patients are asking their pharmacists about supplementing with vitamin D.
Pharmacology
Vitamin D is a fat-soluble vitamin that acts as a steroid hormone. The body makes vitamin D from cholesterol through a process triggered by the action of the sun's ultraviolet B rays on the skin (FIGURE 1). Factors such as skin color, age, amount and time of sun exposure, and geographic location affect how much vitamin D the body makes. Vitamin D influences the bones, intestines, immune and cardiovascular systems, pancreas, muscles, brain, and the control of cell cycles.[6] Its primary functions are to maintain normal blood concentrations of calcium and phosphorus and to support bone health.
Figure 1.
Vitamin D synthesis.
UVB: ultraviolet B.
Source: Reference 32.
Vitamin D undergoes two hydroxylations in the body for activation. There are several metabolic products or modified versions of vitamin D (TABLE 1). Calcitriol (1,25-dihydroxyvitamin D3), the active form of vitamin D, has a half-life of about 15 hours, while calcidiol (25-hydroxyvitamin D3) has a half-life of about 15 days.[6] Vitamin D binds to receptors located throughout the body.
Deficiency, Blood Concentrations, and Toxicity
Risk factors for vitamin D deficiency include living in northern latitudes (in the U.S., above the line from San Francisco to Philadelphia), failing to get at least 15 minutes of direct sun exposure daily, being African American or dark-skinned, being elderly, or being overweight or obese.[5] Rickets and osteomalacia are the well-known diseases of severe vitamin D deficiency. Musculoskeletal pain and periodontal disease may also indicate a significant vitamin D deficiency.[7] Subtle symptoms of milder deficiency include loss of appetite, diarrhea, insomnia, vision problems, and a burning sensation in the mouth and throat.[7] Measuring the blood calcidiol concentration is the standard test for vitamin D status, since calcidiol has the longer half-life.[8]
A normal range of vitamin D is 30 to 74 ng/mL, but this can vary among laboratories.[8] Most experts agree that a concentration between 35 and 40 ng/mL is reasonable for preventive health. Some suggest that the optimal concentration for protecting against cancer and heart disease is between 50 and 70 ng/mL and up to 100 ng/mL. Side effects or toxicity can occur when blood concentrations reach 88 ng/mL or greater.[9] Symptoms include nausea, vomiting, constipation, headache, sleepiness, and weakness.[6] Too much vitamin D can raise blood calcium concentrations, and acute toxicity causes hypercalcemia and hypercalciuria.[6,9]
Disease Prevention
Cancer
Vitamin D decreases cell proliferation and increases cell differentiation, stops the growth of new blood vessels, and has significant anti-inflammatory effects. Many studies have suggested a link between low vitamin D levels and an increased risk of cancer, with the strongest evidence for colorectal cancer. A Creighton University study found that postmenopausal women given 1,100 IU of vitamin D3 (plus calcium) versus placebo were 77% less likely to be diagnosed with cancer over the next 4 years.[10] In the Health Professionals Follow-up Study (HPFS), subjects with high vitamin D concentrations were half as likely to be diagnosed with colon cancer as those with low concentrations.[11]
Some studies have shown less positive results, however. The Women's Health Initiative found that women taking 400 IU of vitamin D3 (plus calcium) versus placebo did not have a lower risk of breast cancer.[12] Many critics have argued that this dosage of vitamin D is too low to prevent cancer. A 2006 Finnish study of male smokers found that those with higher vitamin D concentrations had a threefold increased risk for pancreatic cancer, with cigarette smoking not found to be a confounding factor.[13] A 2009 U.S. study of men and women (mostly nonsmokers) did not confirm these results, finding no association between vitamin D concentrations and pancreatic cancer overall, except in subjects with low sun exposure.[14] In this subgroup, higher vitamin D concentrations were positively associated with pancreatic cancer.[14] A definitive conclusion cannot yet be made about the association between vitamin D concentration and cancer risk, but results from many studies are promising.
Heart Disease
Several studies are providing evidence that the protective effect of vitamin D on the heart could be via the renin-angiotensin hormone system, through the suppression of inflammation, or directly on the cells of the heart and blood-vessel walls. In the Framingham Heart Study, patients with low vitamin D concentrations (<15 ng/mL) had a 60% higher risk of heart disease than those with higher concentrations.[15] The HPFS found that subjects with low vitamin D concentrations (<15 ng/mL) were two times more likely to have a heart attack than those with high concentrations (>30 ng/mL).[16] In another study, which followed men and women for 4 years, patients with low vitamin D concentrations (<15 ng/mL) were three times more likely to be diagnosed with hypertension than those with high concentrations (>30 ng/mL).[17] As is the case with cancer and vitamin D, more studies are needed to determine the role of vitamin D in preventing heart disease, but the evidence thus far is positive.
Fractures and Falls
Vitamin D is known to help the body absorb calcium, and it plays a role in bone health. Also, vitamin D receptors are located on the fast-twitch muscle fibers, which are the first to respond in a fall.[18] It is theorized that vitamin D may increase muscle strength, thereby preventing falls.[5] Many studies have shown an association between low vitamin D concentrations and an increased risk of fractures and falls in older adults.
A combined analysis of 12 fracture-prevention trials found that supplementation with about 800 IU of vitamin D per day reduced hip and nonspinal fractures by about 20%, and that supplementation with about 400 IU per day showed no benefit.[19] Researchers at the Jean Mayer USDA Human Nutrition Research Center on Aging at Tufts University have examined the best trials of vitamin D versus placebo for falls. Their conclusion is that "fall risk reduction begins at 700 IU and increases progressively with higher doses."[18] Overall, the evidence is strong in support of supplementing with vitamin D to prevent fractures and falls.
Autoimmune Diseases and Influenza
Since vitamin D has a role in regulating the immune system and a strong anti-inflammatory effect, it has been theorized that vitamin D deficiency could contribute to autoimmune diseases such as multiple sclerosis (MS), type 1 diabetes, rheumatoid arthritis, and autoimmune thyroid disease. Scientists have suggested that vitamin D deficiency in the winter months may be the seasonal stimulus that triggers influenza outbreaks in the winter.[20] Numerous trials have evaluated the association between vitamin D and immune-system diseases.
A prospective study of white subjects found that those with the highest vitamin D concentrations had a 62% lower risk of developing MS versus those with the lowest concentrations.[21] A Finnish study that followed children from birth noted that those given vitamin D supplements during infancy had a nearly 90% lower risk of developing type 1 diabetes compared with children who did not receive supplements.[22] In a Japanese randomized, controlled trial, children given a daily vitamin D supplement of 1,200 IU had a 40% lower rate of influenza type A compared with those given placebo; there was no significant difference in rates of influenza type B.[23] More studies of the influence of vitamin D on immunity will be emerging, as this is an area of great interest, but it remains unclear whether a causal link exists.
Type 2 Diabetes and Depression
Some studies have shown that vitamin D may lower the risk of type 2 diabetes, but few studies have examined the effect of vitamin D on depression. A trial of nondiabetic patients aged 65 years and older found that those who received 700 IU of vitamin D (plus calcium) had a smaller rise in fasting plasma glucose over 3 years versus those who received placebo.[24] A Norwegian trial of overweight subjects showed that those receiving a high dose of vitamin D (20,000 or 40,000 IU weekly) had a significant improvement in depressive symptom scale scores after 1 year versus those receiving placebo.[25] These results need to be replicated in order to determine a correlation between vitamin D and the risk of diabetes or depression.
Dosing
Only a few foods are a good source of vitamin D. These include fortified dairy products and breakfast cereals, fatty fish, beef liver, and egg yolks. Besides increasing sun exposure, the best way to get additional vitamin D is through supplementation. Traditional multivitamins contain about 400 IU of vitamin D, but many multivitamins now contain 800 to 1,000 IU. A variety of options are available for individual vitamin D supplements, including capsules, chewable tablets, liquids, and drops. Cod liver oil is a good source of vitamin D, but in large doses there is a risk of vitamin A toxicity.[26]
The two forms of vitamin D used in supplements are D2 (ergocalciferol) and D3 (cholecalciferol). D3 is the preferred form, as it is chemically similar to the form of vitamin D produced by the body and is more effective than D2 at raising the blood concentration of vitamin D.[27] Since vitamin D is fat soluble, it should be taken with a snack or meal containing fat. In general, 100 IU of vitamin D daily can raise blood concentrations 1 ng/mL after 2 to 3 months (TABLE 2).[28] Once the desired blood concentration is achieved, most people can maintain it with 800 to 1,000 IU of vitamin D daily.[28] Even though dosages up to 10,000 IU daily do not cause toxicity, it generally is not recommended to take more than 2,000 IU daily in supplement form without the advice of a health care provider.[29] Individuals at high risk for deficiency should have a vitamin D blood test first; a dosage of up to 3,000 to 4,000 IU may be required to restore blood concentrations.[29]
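The rule of thumb above (about 1 ng/mL gained per 100 IU/day over 2 to 3 months) lends itself to a back-of-the-envelope estimate. The helper below is hypothetical, individual response varies widely, and the cautions above about exceeding 2,000 IU daily without professional advice still apply:

def extra_daily_iu(current_ng_ml, target_ng_ml):
    # ~100 IU/day per 1 ng/mL of desired increase, per the cited rule of thumb.
    return max(0, target_ng_ml - current_ng_ml) * 100

print(extra_daily_iu(22, 35))  # 1300 -- ~1,300 IU/day to move from 22 to 35 ng/mL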
Drug Interactions
Vitamin D supplements may interact with several types of medications. Corticosteroids can reduce calcium absorption, which results in impaired vitamin D metabolism.[6] Since vitamin D is fat soluble, orlistat and cholestyramine can reduce its absorption and should be taken several hours apart from it.[6] Phenobarbital and phenytoin increase the hepatic metabolism of vitamin D to inactive compounds and decrease calcium absorption, which also impairs vitamin D metabolism.[6]
Future Research
While considerable research supports the importance of vitamin D beyond bone health, further trials are required before broad claims can be made about vitamin D and prevention of chronic disease. The Institute of Medicine (IOM) is reviewing the research on vitamin D and plans to report in late 2010 regarding any updates to the DRIs for vitamin D (and calcium).[30] Specifically, the IOM will consider the relation of vitamin D to cancer, bone health, and other chronic diseases.[30] An important study, the Vitamin D and Omega-3 Trial, was launched in early 2010 to determine whether 2,000 IU of vitamin D3 and 1,000 mg of EPA (eicosapentaenoic acid) plus DHA (docosahexaenoic acid) daily can lower the risk of cancer, heart disease, stroke, and other diseases.[31] This randomized trial, which will enroll about 20,000 healthy men and women, should provide more insight on vitamin D supplementation.
Conclusion
As the number of people with vitamin D deficiency continues to increase, the importance of this hormone in overall health and the prevention of chronic diseases is at the forefront of research. The best evidence for the possible role of vitamin D in protecting against cancer comes from colorectal cancer studies. Evidence also is strong for the potential role of vitamin D in preventing fractures and falls. At this time, further studies are needed to evaluate the role of vitamin D in protecting against heart disease, autoimmune diseases, influenza, diabetes, and depression.
Top
From Medscape Rheumatology > Ask the Experts > Rheumatoid Arthritis and Related Conditions
Current Treatment Recommendations for Acute and Chronic Gout
Robert Terkeltaub, MD
Posted: 05/08/2002
Question
What are the most current recommendations for treatment of acute and chronic gout? Please also include the latest preventive and dietary recommendations.
Response from Robert Terkeltaub, MD
Some of the standard treatment recommendations for acute and chronic gout have undergone certain revisions in recent years.[1,2] For example, intravenous (IV) colchicine use is no longer advocated for treatment of acute gout.[1] Adrenocorticotropic hormone (ACTH) is now recommended as the treatment of choice for acute gout in many clinical circumstances. In the past, colchicine would have been used as first-line therapy.[1,3]
The advent of cyclooxygenase-2 (COX-2)-selective nonsteroidal anti-inflammatory drugs (NSAIDs) also brings some potentially interesting new options for treatment of acute gout to the table. However, data from well-controlled studies of COX-2-selective NSAID use in patients with acute gout are not yet available.
The treatment of hyperuricemia also continues to be refined. One example is the newly recognized uric-acid-lowering activity of losartan, an angiotensin II receptor antagonist. In a study of hypertensive renal transplantation patients being treated with cyclosporin A, administration of 50 mg of losartan daily resulted in a 17% increase in the fractional excretion of uric acid and an 8% decrease in plasma uric acid.[4]
In addition, the options for uric-acid-lowering therapy for allopurinol-allergic patients unable to take uricosuric drugs also have expanded. Well-described protocols for oral allopurinol desensitization and for the use of oxypurinol are successful in approximately 50% of patients with minor forms of allopurinol hypersensitivity.[5] It is also possible that the use of recombinant uricase could prove valuable in the treatment of patients with coexisting substantial renal failure, an allopurinol allergy, or marked tophaceous gout. But again, definitive results in these populations will require broader, long-term clinical studies.
Finally, the association of Syndrome X with hyperuricemia and gout has been better recognized and characterized in recent years. In a pilot study,[6] a therapeutic diet aimed at the insulin resistance central to Syndrome X has been observed to have substantial uric-acid-lowering effects. Dietary recommendations included a calorie restriction of 1600 kcal per day (40% carbohydrate, 30% protein, 30% fat), replacement of refined carbohydrates with complex carbohydrates, and replacement of saturated fats with mono- and polyunsaturated fats. Findings of reduced serum uric acid and improvement in dyslipidemia suggest that weight loss, in association with a change in macronutrient proportions, could be helpful in the management of chronic gout.
Top
From WebMD Health News
Gout Drug May Lower Blood Pressure
Salynn Boyles
September 24, 2009 — A new study suggests a direct link between a high-sugar diet and high blood pressure, and researchers say the finding may lead to a novel way to treat hypertension.
Middle-aged men who took part in the study showed significant increases in blood pressure after eating a high-sugar diet for just two weeks unless they took the drug allopurinol, used to treat the painful inflammatory condition known as gout.
Gout is caused by the buildup of uric acid in the blood. Excessive alcohol and organ meat consumption are known to cause gout. The sugar fructose has also been shown to raise uric acid levels.
Researcher Richard Johnson, MD, and colleagues first showed that allopurinol could lower high blood pressure by lowering uric acid levels in a small study involving hypertensive preteens and teens reported last year.
Their newly published research showed the same thing in adult men, but Johnson tells WebMD that more study is needed to confirm the findings.
The research was presented in Chicago at the American Heart Association's 63rd High Blood Pressure Research Conference.
"This is the first direct study to suggest that fructose can raise blood pressure and that it is mediated by uric acid, but it is a pilot study," he says. "Allopurinol does have rare, but potentially serious, side effects. Clearly, we need more research before this drug can be recommended to lower blood pressure."
Fructose, Uric Acid, and Blood Pressure
The study included 74 middle-aged men whose average age was 51. All of the men ate 200 grams (800 calories) of fructose every day for two weeks in addition to their regular diets.
To put this in perspective, a recent national health survey suggests that added sugars account for about 400 calories consumed by the average American each day.
Almost all of the sugars and syrups used to sweeten processed foods contain roughly equal amounts of fructose and another sugar, glucose.
Table sugar is made up of about 50% fructose and 50% glucose, while high-fructose corn syrup is 55% fructose and 45% glucose.
All the men in the study ate the high-fructose diets, but half also took the gout drug.
After two weeks on the high-sugar diet, the men who took the drug showed significant declines in uric acid levels and no significant increase in blood pressure.
In contrast, men who did not take the drug had increases of about 6 points in systolic blood pressure (the top blood pressure number) and 3 points in diastolic blood pressure (the bottom blood pressure number).
American Heart Association Recommendations
While it is too soon to recommend taking uric acid-lowering drugs to lower blood pressure, it is clear that too much sugar in the diet can hurt the heart, Johnson says.
The American Heart Association reached the same conclusion in guidelines published last month.
The group recommends the following (a quick conversion check appears after the list):
Women should eat no more than 25 grams (100 calories) of added sugar per day, which is equivalent to about six teaspoons.
Men should eat no more than 37.5 grams (150 calories) of added sugar, which is equivalent to nine teaspoons.
Foods high in added sugars should not take the place of foods that contain essential nutrients.
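The conversions behind those figures assume roughly 4 calories per gram of sugar and about 4.2 grams per teaspoon:

grams = 25                 # AHA daily added-sugar limit for women
print(grams * 4)           # 100 calories (4 kcal per gram of sugar)
print(round(grams / 4.2))  # 6 -- about six teaspoons (~4.2 g per teaspoon)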
"Sugar has no nutritional value other than to provide calories," University of Vermont professor of nutrition Rachel K. Johnson, PhD, MPH, notes in a written statement.
She adds that soft drinks and other sugar-sweetened beverages are the No. 1 source of added sugar in the typical American's diet.
American Heart Association spokeswoman Rhian M. Touyz, MD, PhD, of the University of Ottawa, characterized the new research linking fructose to high blood pressure as intriguing but not conclusive in an interview with WebMD.
"It is clear that we need larger studies to confirm this association," she says. "We know that eating lots of sugar contributes to obesity, but we can't say with certainty that it has a direct impact on blood pressure."
SOURCES:
American Heart Association's 63rd High Blood Pressure Research Conference, Sept. 23, 2009.
Richard Johnson, MD, professor and head of the division of renal diseases and hypertension, University of Colorado-Denver.
Rhian M. Touyz, MD, PhD, senior scientist, professor of medicine, Kidney Research Centre, Ottawa Research Institute, University of Ottawa.
News release, American Heart Association.
The Journal of the American Medical Association, Aug. 27, 2008.
American Heart Association.
Top
From Medscape Diabetes & Endocrinology > Ask the Experts > Weight Management
High-Protein Diets
Sachiko T. St. Jeor, PhD, R.D.
Authors and Disclosures
Posted: 11/09/2000
Question
Recently in my clinical practice, I have come across many patients following chemical diets with an excess of protein intake and a deficient intake of cholesterol and fats, both saturated and unsaturated. How does this affect the overall milieu of the body? It is claimed to reduce weight quite effectively. Also, please advise regarding side effects, if appropriate.
Habib Ahmad, MBBS, MD
What is the evidence behind the low-carbohydrate/high-protein diet currently available as "the Atkins Answer"?
Paul Cronin, MB ChB, BSc, DA
Response from Sachiko T. St. Jeor, PhD, R.D.
The Role of High-Protein Diets for Weight Loss
Recently, there has been a resurgence in the popularity of high-protein diets for weight loss, partially due to the growing epidemic of obesity and the limitations of traditional treatment programs. These diets are attractive because they offer liberal choices among protein-containing foods. However, they are generally restrictive in carbohydrate-containing foods and as such often become monotonous over the long term. The role of these diets in weight loss is limited because initial weight losses are usually difficult to maintain over the long term.
There are currently no long-term randomized studies demonstrating the safety and efficacy of high-protein diets. Individuals who follow such a diet, however, often experience significant initial weight losses due to the change in eating patterns and limitations in total caloric intake. The corresponding switch from a high- to a low-carbohydrate diet, which causes substantial water loss, is partially responsible for this weight loss.
It is important to note that high-protein diets promote a dramatic departure from normal eating patterns and can be self-limiting due to the foods they "allow." Most Americans already consume too much protein. Individuals who follow a high-protein diet tend to compensate by dramatically decreasing carbohydrate intake and increasing fat intake. The percentage of protein in the typical diet is generally stable, averaging approximately 15% of kilocalories per day. Very-high-protein diets, on the other hand, provide twice this amount.
High-protein diets are generally associated with higher intakes of total fat, saturated fat, and cholesterol compared with the average diet. Because food choices are limited, nutrient inadequacy can also become a problem. Individuals who follow these diets for a long period of time are at risk for compromised intakes of vitamins as well as potential cardiac, renal, bone, and liver abnormalities.
Indications for High-Protein Diets
High-protein diets are generally self-prescribed, following the advice of friends and popular books. Most professionals do not consider high-protein diets efficacious, safe, or palatable over the long-term. Instead, the majority promote a "balanced" eating pattern, which can support health maintenance overall.
High-protein foods are high in purines and, as a source of uric acid, may cause or exacerbate gout. A high-protein diet is especially risky for patients with diabetes because it may speed the progression of diabetic renal disease. Furthermore, high-protein diets limit intake of fruits, vegetables, nonfat dairy products, and whole grains, which supply the balance of vitamins, minerals, fiber, and phytochemicals recommended for the treatment and prevention of conditions such as hypertension and osteoporosis. Protein is also the most expensive source of calories in the diet. High-protein diets may not be harmful for "healthy" individuals in the short term, but they do promote unhealthy eating patterns and, therefore, may increase disease risk over the longer term.
Note: The Statement by the American Heart Association Nutrition Committee that provides the background for these answers is currently under review. Additionally, there are numerous ongoing studies and efforts at implementing long-term clinical trials to address the many questions surrounding high-protein diets.
Top
From Medscape Medical News
Postmenopausal Hormone Therapy May Modestly Reduce Gout Risk
Laurie Barclay, MD
July 23, 2009 — Menopause is linked to an increased risk for gout, and postmenopausal hormone therapy to a modestly reduced gout risk, according to the results of a prospective study reported online in the July 9 issue of the Annals of the Rheumatic Diseases.
"Despite the increase of gout incidence in recent years, and the substantial prevalence particularly in the older female population, the risk factors for gout in women remain ill defined," write A. Elisabeth Hak, MD, PhD, from Erasmus MC University Medical Center in Rotterdam, the Netherlands, and colleagues. "Sex hormones have been postulated to be associated with gout risk among women. Serum urate levels, which are closely associated with gout, increase substantially with age in women, whereas among men urate concentrations are not significantly different between middle and older age."
The goal of this analysis was to examine the association between menopause, postmenopausal hormone use, and risk of self-reported, physician-diagnosed, incident gout among 92,535 women without gout at baseline who were enrolled in the Nurses' Health Study. Multivariate proportional hazards regression analysis allowed adjustment for other risk factors associated with gout, including age, body mass index, diuretic use, hypertension, alcohol consumption, and diet.
There were 1703 incident gout cases documented during 16 years of follow-up (1,240,231 person-years). Gout incidence increased from 0.6 per 1000 person-years in women younger than 45 years to 2.5 in women 75 years or older (P for trend < .001). The risk for incident gout was higher in postmenopausal vs premenopausal women (multivariate adjusted relative risk [RR], 1.26; 95% confidence interval [CI], 1.03 - 1.55).
Compared with women aged 50 to 54 years at natural menopause, those younger than 45 years at natural menopause had an RR for gout of 1.62 (95% CI, 1.12 - 2.33). The risk for gout was decreased in women who used postmenopausal hormone therapy (RR, 0.82; 95% CI, 0.70 - 0.96).
"These prospective findings indicate that menopause increases the risk of gout, whereas postmenopausal hormone therapy modestly reduces gout risk," the study authors write. "This increase in the risk of gout tended to be more evident among women with surgical menopause and among women with younger age at natural menopause....These associations were independent of dietary and other risk factors for gout such as age, body mass index, diuretic use and hypertension."
Limitations of this study include reliance on self-report of physician-diagnosed gout.
The National Institutes of Health supported this study. Dr. Hak is the recipient of an Erasmus MC Fellowship (Erasmus MC University Medical Center, Rotterdam, the Netherlands) and has been supported by the Foundation "Vereniging Trustfonds Erasmus Universiteit Rotterdam," the Netherlands.
Ann Rheum Dis. Published online July 9, 2009.
Top
From Drugs & Therapy Perspectives
Effective Management of Gout Often Requires Multiple Medications
Posted: 06/18/2001; Drug Ther Perspect. 2001;17(12) © 2001 Adis Data Information BV
Introduction
Gout is among the most common causes of acute monoarticular arthritis. Most of the drugs prescribed to treat the condition came into use before the introduction of modern clinical trials. Thus, although most of the drugs used in the treatment of gout have proven clinically useful, few well-designed studies are available to evaluate them.
Acute gout can usually be easily managed when diagnosis and treatment are prompt. NSAIDs are usually the first choice for treating acute attacks. Colchicine is often used to prevent recurrences and can be discontinued, if the serum urate level is controlled, after the patient has been free of acute attacks for 1 to 3 months. Many patients with gout require long-term therapy with xanthine oxidase inhibitors or uricosuric agents. Educating patients with gout about early treatment and avoidance of precipitating factors could lead to a better overall outcome.
A Painful Condition...
Gout is a clinical syndrome chiefly characterised by depositions of urate (monosodium urate monohydrate) crystals.[1] The crystals may be deposited in a joint, causing an acute inflammatory response, or in soft tissues. Gout typically occurs in middle age and more commonly in men.[1]
The main clinical features of gout are hyperuricaemia, acute monoarticular arthritis, tophi (nodular masses of monosodium urate crystals deposited in the soft tissues), chronic arthritis and nephrolithiasis.[1] The initial attack usually affects a single joint, although multiple joints can be affected, especially in women. Gouty arthritis primarily affects the peripheral joints, particularly those of the lower extremities. The first metatarsal phalangeal joint is involved in more than half of first attacks and in 90% of individuals at some time.[1]
...with a Characteristic Clinical Course
The clinical course of gout can be divided into 3 stages:[1]
a period of asymptomatic hyperuricaemia (typically lasting 20 to 40 years before the first attack of gouty arthritis occurs)
bouts of acute monoarticular arthritis, with the patient remaining symptom-free between attacks (over time, the interval between attacks becomes shorter and the attacks somewhat milder)
chronic arthritis with superimposed acute attacks; tophi may be present.
Don't Treat Asymptomatic Hyperuricaemia
Although hyperuricaemia is a major risk factor for the development of gout, acute gouty arthritis can occur in the presence of normal serum uric acid levels.[1] Conversely, the great majority of persons with hyperuricaemia never develop gout. In adults, serum levels of uric acid rise steadily over time and vary with height, bodyweight, blood pressure, renal function and alcohol (ethanol) intake. Hyperuricaemia in adults is defined as a serum urate level >7 mg/dl (>416 µmol/L).[1]
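The two cutoff units quoted here are related by the molar mass of uric acid (about 168.1 g/mol); a quick conversion check:

def urate_mg_dl_to_umol_l(mg_dl):
    # 1 mg/dL = 10,000 ug/L; dividing by ~168.1 g/mol gives umol/L.
    return mg_dl * 10_000 / 168.1

print(round(urate_mg_dl_to_umol_l(7.0)))  # 416 -- matching the >416 umol/L cutoff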
Treatment of asymptomatic hyperuricaemia is not recommended and it is neither cost effective nor beneficial.[1] Routine screening is not recommended but if hyperuricaemia is diagnosed, the cause should be determined and corrected if possible.
General Measures a Good Starting Point
Cold packs and rest are useful therapeutic aids to relieve the pain associated with acute gout.[2] Splints may be used in order to limit mobilisation of the joint in order to minimise pain.[3] Therapies that influence serum urate levels such as urate-lowering drugs, diuretics, cyclosporin or salicylates, and interventions such as a strict hypocaloric diet, should not be introduced or withdrawn during an acute attack, since changes in serum urate levels can increase the duration of acute gout symptoms.[4]
Analgesics should be considered as adjuvant therapy in patients with severe pain despite other pharmacological measures,[3] and also in patients with attacks resistant to conventional therapy.
Drug Treatment: Three-Pronged Attack
The approach to the drug treatment of gout involves medications to treat the acute attack, prevent future attacks and lower uric acid levels (see Patient care guidelines and Differential features table).[1] In addition to drug treatment, conditions that are associated with hyperuricaemia and gout (e.g. obesity and hyperlipidaemia) should be addressed. A high-purine diet and alcohol intake can exacerbate hyperuricaemia and should be avoided. Patients should drink at least 8 glasses of liquids daily. Thiazides and loop diuretics can decrease the clearance of uric acid and reduce plasma volume, and should be avoided if possible. Other drugs to avoid are low-dose aspirin (acetylsalicylic acid), ethambutol, pyrazinamide and nicotinic acid (niacin).[1]
Patient Care Guidelines (figure).
Early Treatment Important for Acute Gout
Early treatment (within 24 hours) is the key to effective management of an episode of acute gouty arthritis.[1,2] Delay in initiating therapy for acute gouty attacks can be avoided in patients with interval gout.[2] These patients commonly experience an acute gouty 'aura' before the attack reaches its greatest intensity, and some may be able to take anti-inflammatory drugs for early control of gouty symptoms. Nevertheless, the presence of 'red flag' symptoms, such as high-grade fever and malaise, makes medical consultation advisable.
NSAIDs First Choice
If the diagnosis is clear, NSAIDs are the preferred therapy for acute gout and are effective when used early in the attack.[1,2] Most drugs achieve clinical efficacy within 1 to 3 days of initiation of therapy[2] and the majority of patients experience complete resolution of an acute attack of gout within 5 to 8 days.[1]
One of the drawbacks of NSAIDs is their propensity to cause gastrointestinal adverse effects. High dosages of piroxicam, ketoprofen, indomethacin, naproxen, diclofenac and sulindac are most often used for the treatment of acute gout; these drugs are associated with an intermediate risk of such adverse effects.[5] Azapropazone is considered to carry the highest risk of the NSAIDs and its use should be reserved for patients in whom treatment with less toxic drugs has been insufficient.[5] NSAIDs showing more selective COX-2-inhibiting activity (e.g. celecoxib, rofecoxib) are expected to be associated with a lower risk [see article entitled 'Will the promise of the COX-2 selective NSAIDs come to fruition?' Drugs & Therapy Perspectives 2001 Jun 4; 17 (11): 6-11].
The decision of whether to prescribe an NSAID, and which agent to choose, will be largely determined by any associated conditions (diminished hepatic or renal function, hypertension, recent bleeding, peptic ulcer and advanced age) and concomitant therapies (e.g. anticoagulant drugs) in the patient.
Colchicine More Specific, More Toxic Too
Colchicine, the traditional agent used, is more specific for gout than the NSAIDs. Acute gouty arthritis is a neutrophil-mediated inflammatory reaction.[7] Colchicine inhibits this inflammatory reaction by interfering with the assembly of cell microtubules[8] which in turn limits the capacity of the neutrophil to migrate, affecting phagocytic function and the release of inflammatory mediators.[9,10]
Colchicine has anti-inflammatory but no analgesic effects. Oral doses of 0.5 to 0.6 mg hourly or 1 mg every 2 hours, up to a maximum of 6 mg, work very well in acute gout, and are probably as effective as the NSAIDs,[5] if given within the first 24 hours of onset of the attack. However, up to 80% of people cannot tolerate the drug because of abdominal pain, nausea and diarrhoea.[1]
Intravenous colchicine can be used if the oral route is not available or gastrointestinal adverse effects have to be avoided. However, this route is potentially dangerous, with possible severe adverse effects including bone marrow suppression and renal or hepatic damage.[11] This form of therapy should be avoided in patients with hepatic or renal disease, and very close monitoring is essential. Concomitant oral and intravenous colchicine should not be administered and oral colchicine should not be started for at least a week after intravenous administration.[2]
Corticosteroids Helpful For Some
Corticosteroid therapy, either oral or parenteral, is effective in patients who are unable to take or tolerate NSAIDs and colchicine, provided bacterial arthritis has been ruled out.[1,2] Monoarticular gout responds well to corticosteroids given by intra-articular injection, although postinjection flare-up may complicate this technique. Systemic therapy can be used when more than 1 joint is involved or the patient is refractory to other treatments. Dosage adjustment is required in elderly patients and those with chronic renal or hepatic failure.[2] Corticosteroids should not be given to patients with diabetes mellitus without careful monitoring.[2]
Corticotropin Works But Not Preferred
Although the tolerability and efficacy of intramuscular corticotropin in the treatment of acute gout has been demonstrated in a number of studies, there is no convincing evidence that such therapy is superior to oral corticosteroids, except in patients who cannot take oral medications.[12,13] Drawbacks of corticotropin include the dependence of therapeutic effects on the sensitivity of the adrenal cortex (the drug may be ineffective after treatment with corticosteroids), increased release of adrenal androgens and mineralocorticoids (which can lead to fluid overload), and a relatively short duration of action, with a greater potential for rebound attacks and treatment failures, possibly requiring repeated parenteral administration.[1,2] The drug should not be administered to patients receiving long-term corticosteroid therapy.[2]
Prophylaxis Often Warranted
Prophylaxis of acute gout attacks is indicated during interval gout and at the onset of urate-lowering therapy.[2] Low doses of colchicine may be effective in preventing flares of gout associated with the fall in serum urate levels occurring when urate-lowering therapy is initiated, and prophylaxis should be started prior to the initiation of urate-lowering drugs.[1,2]
Colchicine is more commonly used for the prophylaxis of acute attacks than for treatment.[2] Prophylaxis with colchicine clearly reduces the rate of recurrent attacks, whether or not the serum urate concentration is normal.[14] However, long-term therapy can lead to toxicities such as neuromyopathy, myelotoxicity and alopecia.[1,2] Therapy with colchicine can be discontinued once the serum urate level has been controlled and the patient has not had an acute attack for 1 to 3 months. NSAIDs can be useful if colchicine alone is insufficient.[1] Low doses of NSAIDs could also be considered for prophylaxis in patients intolerant of colchicine.[2]
Urate Levels May Need Lowering...
Treatment for hyperuricaemia should be initiated in patients with frequent gout attacks, tophi or urate nephropathy.[1] Reduction of serum urate levels to <6 mg/dl (<357 µmol/L) generally reduces the recurrence of gouty arthritis, but levels <5 mg/dl (<297 µmol/L) may be necessary for resorption of tophi.[15]
...but Best Drug Choice Debated
There are 2 choices of therapy for lowering uric acid levels: allopurinol and uricosuric drugs. There is some controversy over how to select treatment for each individual patient. The need to measure 24-hour urinary uric acid levels, allowing determination of whether a patient's hyperuricaemia is caused by urate overproduction or decreased excretion, has been debated. Proponents of this evaluation maintain that an overproducer should be treated with allopurinol and an underexcreter should be prescribed a uricosuric agent.[16] However, in most patients urate-lowering therapy can begin with allopurinol without measuring uric acid excretion, as this drug is effective regardless of the cause of hyperuricaemia.[1]
Allopurinol. Allopurinol is a xanthine oxidase inhibitor. It causes a detectable decrease in serum urate levels within the first 24 hours and an expected maximum reduction within 2 weeks.[1] Not only is allopurinol effective in both overproducers and underexcreters of uric acid, it also has a number of other advantages compared with uricosuric drugs: it can be used in a single daily dose, causes fewer adverse effects, is associated with fewer drug interactions and is effective in patients with renal failure (although it should be administered cautiously in these patients) or a history of nephrolithiasis.
Uricosuric Agents. Although allopurinol is often preferred as the choice of urate-lowering therapy, there is still a group of patients in whom uricosuric agents can be used. The patients must be compliant with treatment, be younger than 60 years, have normal renal function, underexcrete uric acid and have no history of nephrolithiasis.[1]
The most commonly used uricosuric agents are probenecid and sulfinpyrazone. Probenecid works at the level of the proximal tubule by blocking reabsorption of filtered uric acid. Sulfinpyrazone is preferred by some because of its added antiplatelet effects. These drugs are administered up to 4 times daily. A third uricosuric drug is benzbromarone, an inhibitor of the tubular reabsorption of urate. It can be used in patients with renal insufficiency, and may be better tolerated by the elderly.[1]
By promoting uric acid excretion, uricosuric agents may precipitate nephrolithiasis. This rare complication can occur early in the course of treatment and may be prevented by initiating therapy at low doses, forcing hydration, and possibly alkalinising the urine.[17]
Simple Analogy Aids Patient Education
One of the major reasons for treatment failure is the inability of some patients to understand the complexities of treatment, which at times involves using 3 medications according to different schedules.[1] The 'match' analogy is a simple way to help patients understand the different medications and use them appropriately.[6] It can be explained to patients that gout is caused by uric acid and its salts. In patients with gout, these substances accumulate around the joints and may be imagined as a bunch of matches in the area. When a gout attack occurs, it is as though one of the matches strikes and catches the joint on fire. When that happens, an NSAID should be taken at the very first sign of an attack. If not taken soon enough, more matches will catch fire and the gout attack will become much worse. However, taking an NSAID does not cure the gout, it only puts out the fire. The matches will still be there after an attack and can light again. Additional medication is required to get rid of the matches, or lower the uric acid levels. Because it takes some time for urate-lowering therapy (e.g. allopurinol) to work, gout attacks can still occur. This is why another medication (colchicine) is given. Colchicine can prevent gout attacks by keeping the matches damp, making it hard for them to catch fire. This usually prevents gout, but if signs of an attack do begin, an NSAID should be taken.
News From ACR 2010
This coverage is not sanctioned by, nor a part of, the ACR.
From Medscape Medical News
Fructose Intake Associated With an Increased Risk for Gout
Emma Hitt, PhD
November 16, 2010 (Atlanta, Georgia) — Consuming sugar-sweetened sodas, orange juice, and fructose is associated with an increased risk for incident gout, according to new research findings from the Nurses' Health Study.
Hyon K. Choi, MD, DrPH, from Boston University School of Medicine, Massachusetts, presented the findings here at the American College of Rheumatology 2010 Annual Meeting. The results were also published online November 10 in the Journal of the American Medical Association.
The main message, Dr. Choi told Medscape Medical News, is that "if your patient has hyperuricemia, or gout, and if they are consuming sugary beverages, particularly containing fructose (i.e., sugar, not artificial sweeteners), then I would recommend them stopping or at least reducing their intake."
Dr. Choi and colleagues analyzed data from the Nurses' Health Study, an American prospective cohort study spanning 22 years, from 1984 to 2006. Women with no history of gout at baseline (n = 78,906) provided information about their intake of beverages and fructose by filling out validated food frequency questionnaires.
Over the course of the study, 778 incident cases of gout were reported. Compared with the consumption of less than 1 serving per month of sugar-sweetened soda, the consumption of 1 serving per day was associated with a 1.74-fold increased risk for gout, and the consumption of 2 or more servings per day was associated with a 2.39-fold increased risk (P < .001 for trend).
Consumption of orange juice was associated with a 1.41-fold and 2.42-fold increased risk for 1 and 2 servings per day, respectively (P = .02 for trend).
For 1 and 2 servings of sugar-sweetened soda, the absolute risk differences were 36 and 68 cases per 100,000 person-years, respectively; for 1 and 2 servings of orange juice, the absolute risk differences were 14 and 47 cases per 100,000 person-years, respectively.
The consumption of diet soft drinks was not associated with the risk for gout (P = .27 for trend).
Compared with the lowest quintile of fructose intake, the multivariate relative risk for gout in the top quintile was 1.62 (95% confidence interval, 1.20 - 2.19; P = .004 for trend), indicating a risk difference of 28 cases per 100,000 person-years.
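As a sanity check on how the relative risks and absolute risk differences above fit together: the risk difference equals the baseline incidence multiplied by (relative risk minus 1). A rough back-calculation from the published figures follows; the ~49 per 100,000 person-years baseline is an inferred value, not one reported in the study:

```python
# Relate relative risk to absolute risk difference:
#   risk difference = baseline incidence x (RR - 1)
def risk_difference(baseline_per_100k: float, rr: float) -> float:
    return baseline_per_100k * (rr - 1.0)

# Crude overall incidence: 778 cases over ~78,906 women x 22 years.
crude = 778 / (78_906 * 22) * 100_000
print(round(crude, 1))  # ~44.8 cases per 100,000 person-years

# An assumed baseline of ~49/100,000 person-years in the lowest-intake
# group reproduces the published soda figures closely:
print(round(risk_difference(49, 1.74)))  # ~36 (1 serving/day)
print(round(risk_difference(49, 2.39)))  # ~68 (>=2 servings/day)
```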
According to Dr. Choi, the mechanism by which fructose contributes to the pathology of gout is well understood.
"Administration of fructose to human subjects results in a rapid increase in serum uric acid and increased purine synthesis," he explained. "In addition, this effect is more pronounced in individuals with hyperuricemia or a history of gout."
In the published paper, the authors point out that because "fructose intake is associated with increased serum insulin levels, insulin resistance, and increased adiposity, the overall negative health effect of fructose is expected to be larger in women with a history of gout, 70% of whom have metabolic syndrome."
According to independent commentator George Bray, MD, from the Pennington Biomedical Research Center in Baton Rouge, Louisiana, this is another "nail in the coffin for the overuse of fructose-containing beverages."
"In a previous report, gout in men was associated with a higher intake of fructose (either sugar or high-fructose corn syrup from beverages)," he told Medscape Medical News. "This paper extends this using the Nurses' Health Study to show that the higher intake of fructose (soft drinks and juices) is associated with an increased risk of gout in women."
Dr. Bray added that it would be a good idea to include the fructose content of foods and beverages on the label for the public's information.
The study was not commercially funded. Dr. Choi reports receiving research grants and consulting fees from Takeda Pharmaceuticals North America. Dr. Bray has disclosed no relevant financial relationships.
JAMA. Published online November 10, 2010. Abstract
ACR 2010 Annual Meeting: Abstract L5. Presented November 10, 2010.
Abstract and Introduction
Introduction
Nearly 72 million people in the United States, or one out of three American adults, have hypertension (HTN). In addition, one-third of people with HTN are unaware they even have high blood pressure (BP), which is why HTN is often referred to as "the silent killer."[1] Hypertension is defined as a BP >140/90 millimeters of mercury (mmHg). As BP rises, risk increases for heart failure, myocardial infarction, kidney disease, and stroke. For each 20 mmHg increase in systolic blood pressure (SBP) or 10 mmHg increase in diastolic blood pressure (DBP) above 115/75 mmHg, the risk of cardiovascular disease doubles.[2] A recent study conducted in nondiabetic patients supports treating to a target SBP <130 mmHg versus a target SBP <140 mmHg. The group achieving the lower SBP experienced significantly less development of left ventricular hypertrophy and fewer cardiovascular events than the group treated to the usual SBP goal.[3] The current Joint National Committee on Prevention, Detection, Evaluation, and Treatment of High Blood Pressure (JNC 7) classification and treatment of BP for adults is given in TABLE 1.
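The doubling rule quoted above implies an exponential relationship between BP elevation and relative cardiovascular risk. A minimal, illustrative sketch of that arithmetic (a rule-of-thumb calculation, not a validated risk score):

```python
# Relative CV risk implied by the rule of thumb: risk doubles for each
# 20 mmHg of SBP (or 10 mmHg of DBP) above a 115/75 mmHg baseline.
def implied_relative_risk(sbp: float, dbp: float) -> float:
    sbp_doublings = max(sbp - 115, 0) / 20
    dbp_doublings = max(dbp - 75, 0) / 10
    # Use whichever component implies the greater risk.
    return 2.0 ** max(sbp_doublings, dbp_doublings)

print(implied_relative_risk(135, 85))  # 2.0 -> one doubling vs. 115/75
print(implied_relative_risk(155, 95))  # 4.0 -> two doublings
```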
Various lifestyle risk factors have been identified that elevate blood pressure and lead to HTN. Many of these risk factors have been well documented in the literature, and, according to recent trials or new research awaiting publication, others have been postulated to affect BP (TABLE 2). A healthy lifestyle is essential to preventing HTN and managing it successfully. Lifestyle modifications should be incorporated into every treatment regimen for prehypertension and HTN (TABLE 3). Implementation of a healthy lifestyle decreases BP, reduces cardiovascular disease risk, and increases the efficacy of antihypertensive medications.[2]
Conventional Risk Factors for Developing Hypertension
Hypertension can develop because of a person's lifestyle, medication regimen, underlying health conditions, genetic history, or a combination of these factors. Nonmodifiable risk factors include advancing age, race, family history of HTN or premature heart disease, and other concurrent health conditions. Some of these health conditions include adrenal tumors, chronic kidney disease, congenital heart defects, diabetes, thyroid disorders, pheochromocytoma, and pregnancy. Hypertension is more common in African Americans and appears to develop at an earlier age in this race. Medications that may cause HTN include caffeine, chronic steroid therapy, oral contraceptives, nonsteroidal anti-inflammatory drugs (NSAIDs), cyclooxygenase-2 (COX-2) inhibitors, amphetamines and other stimulant drugs, cocaine, decongestants, weight loss drugs, cyclosporine and other immunosuppressants, erythropoietin, and OTC supplements (e.g., ephedra, licorice, ma huang).[2]
Established Lifestyle Risk Factors for Developing Hypertension
There are many modifiable risk factors for HTN, and the list seems to be growing steadily with ongoing research. Cigarette smoking is the single most common avoidable cause of cardiovascular death in the world.[4] Data from the CDC show that 21% of adults (18 years of age and older) in the U.S. currently smoke cigarettes.[5] Those who smoke 15 or more cigarettes per day have a higher incidence of HTN. Smoking immediately raises BP and heart rate transiently through increasing sympathetic nerve activity and myocardial oxygen consumption. Chronically, tobacco chemicals damage the lining of the arterial walls of the heart, resulting in artery stiffness and narrowing that can last for 10 years after smoking cessation. Smoking also increases the progression of renal insufficiency and risk of other cardiovascular complications.[4,6]
Obesity is estimated to be the leading cause of preventable illness in the U.S. Greater than two-thirds of HTN prevalence can be attributed to obesity.[7] The National Heart, Lung, and Blood Institute (NHLBI) defines obesity as having a body mass index (BMI) ≥30 kg/m2.[2] Results from the National Health and Nutrition Examination Survey (NHANES, 2005–2006) indicate that 34.3% of the U.S. adult population is obese.[8] Obesity is most pronounced in the southeast region of the country. Overweight prevalence among children and adolescents also remains high in the U.S., with 10% of U.S. children classified as overweight or obese.[7,8] Abdominal adiposity, in particular, is linked to congestive heart failure, coronary artery disease, diabetes, sleep apnea, and stroke. Being overweight requires that more blood be supplied to oxygenate body tissues, and as the circulated blood volume increases through the blood vessels, the pressure increases on the artery walls.[6,7]
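BMI, as used in the NHLBI definition above, is simply weight in kilograms divided by the square of height in meters. A minimal sketch with the standard cut-offs:

```python
# BMI = weight (kg) / height (m)^2.
# NHLBI categories: <18.5 underweight, 18.5-24.9 normal,
# 25-29.9 overweight, >=30 obese.
def bmi(weight_kg: float, height_m: float) -> float:
    return weight_kg / height_m ** 2

def category(value: float) -> str:
    if value < 18.5:
        return "underweight"
    if value < 25:
        return "normal"
    if value < 30:
        return "overweight"
    return "obese"

b = bmi(95, 1.75)  # illustrative patient: 95 kg, 1.75 m
print(round(b, 1), category(b))  # 31.0 obese
```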
Besides obesity, a lack of physical activity and sedentary lifestyle produce an increase in heart rate. An increased heart rate requires that the heart work harder with each contraction, and it exerts a stronger force on the arteries, thereby raising BP. Physical inactivity has also been linked to more health care office visits, hospitalizations, diabetes, and increased medication burden.[6,9]
Multiple dietary factors increase the risk for HTN. It is well known that excessive sodium intake leads to HTN. A diet high in salt causes the body to retain fluid, and increased water movement raises the pressure within the vessel walls.[6] The majority of the sodium in Western-style diets is derived from processed foods. High-salt diets decrease the effectiveness of antihypertensives in patients with resistant HTN. Resistant HTN is defined as having a BP above one's goal despite using three or more antihypertensive medications concurrently.[10] A high-salt diet can also increase the need for potassium. Potassium balances the amount of sodium within cells. If not enough potassium is consumed or retained, sodium accumulates in the blood. A diet low in potassium (<40 mEq/day) produces sodium accumulation through decreased sodium excretion, thereby leading to HTN. Potassium deficiency also increases the risk for stroke.[6,11]
Excessive alcohol consumption consisting of greater than two drinks per day for men or greater than one drink per day for women leads to sustained BP elevations.[2] Alcohol interferes with blood flow by moving nutrient-rich blood away from the heart.[12] Alcohol can also reduce the effectiveness of antihypertensives. Binge drinking, or having at least four drinks consecutively, may cause significant and rapid increases in BP.[13] Debate exists on whether low-to-moderate alcohol consumption raises or lowers BP.
Emerging Risk Factors for Developing Hypertension
A diet high in sugar, fructose in particular, raises BP in men, according to a recent study presented at the American Heart Association's (AHA) 2009 High Blood Pressure Research Conference.[14] High fructose consumption has also been linked to an increased risk of obesity. Fructose is a dietary sugar that is used in corn syrup and accounts for one-half of the sugar molecules in table sugar. High-fructose corn syrup is often utilized in packaged sweetened products and drinks due to its long shelf life and low cost. In this study, men consuming a high-fructose diet for just 2 weeks experienced an increased incidence of HTN and metabolic syndrome.[14]
Vitamin D deficiency (<80 nmol/L) may increase the risk of developing systolic HTN in premenopausal women years later, according to a study conducted in Caucasian women in Michigan.[15] In this study, presented at the AHA's High Blood Pressure Research Conference, researchers compared BP and vitamin D levels drawn in 1993 to those drawn 15 years later in 2007. Premenopausal women (average age of 38 years) with vitamin D deficiency in 1993 were three times more likely to have HTN in 2007 than those with normal vitamin D levels in 1993.[15]
Sleep deprivation raises SBP and DBP and may lead to HTN. In the recent Coronary Artery Risk Development in Young Adults (CARDIA) sleep study, sleep maintenance and sleep duration were measured in a group of adults aged 35 to 45 years and then repeated 5 years later on the same study population.[16] According to this study, shorter sleep duration and poor sleep quality increase BP levels and lead to HTN. Sleep deprivation may produce an increase in heart rate and sympathetic activity, evolving into HTN.[16]
A connection has been found between HTN and road traffic noise. A study published in Environmental Health in 2009 measured the loudness of road noise in decibels at the home addresses of a large number of adults and recorded their incidence of self-reported HTN. A significant association was found between incidence of HTN and residing near a noisy road. Interestingly, a less prominent effect on BP was noted in the elderly when compared with younger adults. Possible explanations offered by the authors include that noise may be harder to detect in the elderly and may be less of an annoyance in the older population than in younger individuals. The study authors speculate that long-term exposure to noise provokes an endocrine and sympathetic stress response in a middle-aged adult's vascular system, resulting in HTN and an elevated cardiovascular risk profile.[17]
A questionnaire completed by deployed American servicemen and servicewomen revealed that those reporting multiple exposures to combat had a significantly higher incidence of HTN than those reporting no combat. The elevation in BP is thought to arise from the high stress situation of combat exposure. Combat stress can result in significant physical and psychosocial stress to those deployed.[18]
Lifestyle Modifications for Treatment of Hypertension
Cigarette smoking is a modifiable cardiovascular risk factor that can have profound effects. Smoking cessation can result in immediate improvement in BP and heart rate after just 1 week.[19] A linear relationship has been discovered in improvement in arterial wall stiffness and duration of smoking cessation in ex-smokers. Achievement of a decade of smoking cessation results in remodeling to nonsignificant levels of arterial stiffness.[20] In addition to lowering BP, smoking cessation results in an overall cardiovascular risk reduction and reduction in mortality. Rigorous measures should be utilized to assist individuals in achieving smoking cessation.[2] Smoking cessation should be assessed and discussed at every available opportunity, whether it be inpatient, outpatient, or at the pharmacy. Studies have shown that when patients are told their lung age, they are more likely to quit smoking.[21] Pharmacists possess an enormous opportunity to assist patients in achieving smoking cessation by teaching patients about the various smoking cessation pharmacotherapy options. An explanation of how to properly use the medications (OTC and prescription), differences between them, and what to expect from the medications can improve adherence and the desired outcome of successful smoking cessation.
Weight reduction can have the most profound effect of all lifestyle modifications on lowering BP, leading to an approximate drop in SBP of 5 to 20 mmHg per 10 kg weight loss. The JNC 7 guidelines recommend weight reduction to maintain a normal body weight defined as a BMI between 18.5 and 24.9 kg/m2.[2] The Surgeon General's recommendations published by the U.S. Department of Health and Human Services advise determining a person's BMI and having him or her lose at least 10% of body weight if overweight or obese. It is also recommended to lose weight gradually at a pace of one-half to two pounds per week.[22]
Along with weight reduction, regular aerobic physical activity for 30 minutes or more per day most days of the week is recommended and results in an SBP improvement of 4 to 9 mmHg.[2] It is recommended that children be physically active for 60 minutes most days of the week. The Surgeon General recommends limiting television viewing to below 2 hours per day.[22]
The JNC 7 guidelines recommend multiple dietary modifications. The most notable and effective is adoption of the Dietary Approaches to Stop Hypertension (DASH) eating plan, which can lower SBP by 8 to 14 mmHg.[2] The DASH eating plan is as efficacious as adding a single antihypertensive medication. This diet plan includes a significant consumption of fruits and vegetables rich in potassium, which assists in maintaining an optimal sodium-to-potassium ratio. The DASH eating plan is low in saturated fat and consists of low-fat dairy products. Sodium restriction is an important component of the DASH diet and is also recommended independently in the JNC 7 guidelines. A reduction in sodium intake to ≤100 mmol/day (6 g NaCl or 2.4 g sodium) can drop SBP by 2 to 8 mmHg. The DASH diet also provides details on how to check labels for sodium content and how to estimate sodium amounts in foods based on how they are cooked or prepared when eating in restaurants.[2,23] The Surgeon General also recommends selecting sensible portions.[22]
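The sodium figures above are the same quantity in different units: sodium's molar mass is about 23 g/mol, and sodium makes up roughly 39% of NaCl by weight. A quick check (the small mismatch with the quoted 2.4 g reflects rounding in the guideline):

```python
# Express 100 mmol/day of sodium as grams of sodium and grams of NaCl.
NA_MG_PER_MMOL = 23.0     # molar mass of sodium, ~23 mg/mmol
NACL_MG_PER_MMOL = 58.4   # molar mass of NaCl, ~58.4 mg/mmol

mmol_per_day = 100
print(round(mmol_per_day * NA_MG_PER_MMOL / 1000, 1))    # ~2.3 g sodium
print(round(mmol_per_day * NACL_MG_PER_MMOL / 1000, 1))  # ~5.8 g NaCl (~6 g)
```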
Limiting alcohol consumption to two drinks or less for most men and one drink per day or less for women is recommended by the JNC 7 guidelines. The equivalency of two drinks is defined as 24 oz of beer, 1 oz of ethanol (e.g., vodka, gin), 3 oz of 80-proof whiskey, or 10 oz of wine. A decrease in alcohol intake can lower SBP by 2 to 4 mmHg.[2]
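For orientation, the approximate SBP reductions that JNC 7 attaches to each modification discussed in this section can be tallied. The ranges are not strictly additive in practice, so the sum in this sketch is illustrative only:

```python
# Approximate SBP reductions (mmHg) quoted for each JNC 7 lifestyle
# modification discussed above.
sbp_reduction_mmhg = {
    "weight reduction (per 10 kg lost)": (5, 20),
    "DASH eating plan": (8, 14),
    "sodium restriction": (2, 8),
    "regular aerobic activity": (4, 9),
    "moderation of alcohol": (2, 4),
}

low = sum(lo for lo, _ in sbp_reduction_mmhg.values())
high = sum(hi for _, hi in sbp_reduction_mmhg.values())
print(low, high)  # 21 55 -- a naive, non-additive upper bound
```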
Plausible Lifestyle Modifications for Treatment of Hypertension
Lowering fructose intake through limiting consumption of sweetened products could prevent rises in BP and development of metabolic syndrome. Reducing intake of sweetened drinks or processed foods that contain high-fructose corn syrup and lessening use of regular table sugar will lower intake of fructose.[14]
Vitamin D deficiency is widespread among women. Some researchers speculate that many women do not receive adequate sun exposure, do not obtain enough vitamin D in their diet, or do not supplement with enough vitamin D. The current recommended intake of vitamin D for this population is 400 to 600 IU per day, though some researchers suggest a higher daily intake. Knowing one's vitamin D level and obtaining adequate vitamin D through diet and/or supplementation may prevent HTN.[15]
A randomized, controlled trial published in 2007 demonstrated that regular consumption of a small amount of dark chocolate mildly reduces BP (an average of -2.9 mmHg systolic and -1.9 mmHg diastolic) in people with stage 1 HTN or prehypertension. The study population did not have other cardiovascular risk factors and was not taking antihypertensive medications. The study compared daily intake (30 kcal, or the equivalent of a Hershey's Kiss) of dark chocolate and white chocolate for 18 weeks. The group receiving white chocolate had no improvement in BP. It is suspected that the polyphenols in the dark chocolate lower BP.[24]
A recent study explored the effects of various milk and cheese products on developing HTN in adults aged 55 years and older living in the Netherlands. It was discovered after 6 years that higher dairy intake was associated with lower rates of HTN. The authors concluded that consumption of low-fat dairy products may prevent HTN in older individuals.[25] Another study conducted in U.S. women aged 45 years and older showed similar results with intake of low-fat dairy products, but not with supplements of calcium or vitamin D.[26]
Lastly, various studies have shown that ownership of a dog or cat lowers a person's BP. Whether this is accomplished through increased exercise or the psychological effects of a human-animal connection is yet to be fully established. Health benefits of pet ownership include BP reductions, a reduction in triglyceride levels, improved exercise habits, decreased feelings of loneliness, and decreased stress levels.[27,28]
Conclusion
A person's way of life can have substantial effects on his or her health, including the risk of developing HTN. Numerous lifestyle risk factors have been implicated in the development of HTN; likewise, several lifestyle modifications effectively lower BP. Alterations in lifestyle are essential to prevention and treatment of HTN and can decrease the need for one or more prescription medications. Lifestyle changes to lower BP can additionally correct obesity, lower cardiovascular risk, decrease insulin resistance, and enhance the efficacy of antihypertensive drugs. Greater BP reductions are achieved if two or more lifestyle adjustments are made concurrently. Assisting and motivating patients to make lifestyle changes to lower their BP to goal levels is recommended by the JNC 7 guidelines yet is often underutilized by health care clinicians. It is imperative that pharmacists be knowledgeable in risk factors and treatments for HTN and express interest in having patients reach their BP goals. Studies have shown that involvement of a pharmacist in the treatment of hypertensive patients can result in improved BP control through adoption of lifestyle modifications, proper antihypertensive selection, and better adherence to medications.[2,29]
Top
From Journal of the American Pharmacists Association
Therapeutic Lifestyle Changes and Pharmaceutical Care in the Treatment of Dyslipidemias in Adults
Thomas L. Lenz
Authors and Disclosures
Posted: 08/17/2005; J Am Pharm Assoc. 2005;45(4):492-502. © 2005 American Pharmacists Association
Abstract and Introduction
Abstract
Objective: To review each therapeutic lifestyle change (TLC) component listed in the National Cholesterol Education Program (NCEP) Adult Treatment Panel III (ATP III) cholesterol guidelines and discuss how the guidelines can be used by pharmacists in the treatment of patients with dyslipidemias.
Data Sources: Published guidelines and abstracts identified through PubMed (May 1987-March 2004) and Medline (January 1966-March 2004), using the search terms cholesterol, hypercholesterolemia, dyslipidemia, hyperlipidemia, diet, saturated fats, unsaturated fats, trans-fatty acids, overweight, obese, exercise, physical activity, program adherence, and guidelines; as well as the NCEP ATP III guidelines, the 2004 ATP III update, National Heart, Lung, and Blood Institute Obesity Guidelines, and Dietary Guidelines for Americans 2005.
Study Selection: Performed manually by author.
Data Extraction: Performed manually by author.
Data Synthesis: TLC components are recommended in the NCEP ATP III guidelines for treatment of patients with dyslipidemias independent of medication use. Dietary modifications are the primary focus of TLC therapy. Saturated fat intake should be limited to less than 7% of total caloric intake and trans-fatty acid intake should be low for patients with dyslipidemias. Persons who are overweight or obese with dyslipidemias should reduce body weight through a combination of physical activity, total calorie reduction, and behavior therapy modifications.
Conclusion: Pharmacists, given the proper training, can be effective at offering preventive pharmaceutical care for decreasing high blood cholesterol and the risk for coronary heart disease through patient counseling on TLC components as well as drug therapy in patients with dyslipidemias.
Introduction
In September 2002 the third Report of the National Cholesterol Education Program Expert Panel on Detection, Evaluation, and Treatment of High Blood Cholesterol in Adults (NCEP ATP III) was published.[1] High blood cholesterol has been shown to cause atherosclerotic plaque to accumulate in the coronary arteries, leading to the development of coronary heart disease (CHD). The focus of the ATP III guidelines is on short-term prevention of acute coronary syndromes as well as the need for long-term prevention of coronary atherosclerosis. In 2004, an update to the NCEP ATP III guidelines was published that placed greater importance on treating high-risk CHD patients more aggressively.[2] To address the risks associated with high blood cholesterol and atherosclerosis, ATP III recommends several lifestyle modifications aimed at reducing the long-term risk for CHD.[1]
The clinical management of lifestyle therapies is termed therapeutic lifestyle changes (TLC) in the ATP III guidelines. The components of lifestyle therapy were chosen on the basis of their ability to lower both serum cholesterol and the risk for CHD. TLC involves a multifactorial lifestyle approach in which patients participate in several, if not all, TLC components[1]: (1) reduced intake of saturated fats and cholesterol; (2) therapeutic dietary options (e.g., plant stanols/sterols and increased viscous [soluble] fiber) for lowering low-density lipoprotein cholesterol (LDL-C); (3) weight reduction; and (4) increased regular physical activity.
Each of the TLC components is reviewed in detail in this article so that pharmacists can increase their understanding of additional therapies that may be used alone or in conjunction with medications to decrease the risk of CHD in patients with dyslipidemias.
Dietary Modifications
Dietary modifications are the primary focus of ATP III TLC therapy. The TLC diet is generally consistent with the Dietary Guidelines for Americans published by the U.S. Department of Agriculture and U.S. Department of Health and Human Services.[3] Lowering LDL-C is the first priority with diet modifications.[1] Once patients have reached their LDL goals, weight control and physical activity become the treatment priority.[1]
Lowering LDL-C through dietary modifications can most successfully be accomplished by reducing the dietary intake of saturated fats, trans-fatty acids, and cholesterol.[1,2] The ATP III guidelines recommend that total fat consumption should not exceed 25% to 35% of total daily caloric intake.[1,2] Most of the fat calorie consumption should be obtained from unsaturated fatty acids.[1-4] Polyunsaturated fats and monounsaturated fats make up total unsaturated fatty acids and should represent approximately 10% and 20%, respectively, of total daily caloric intake.[1,4] The remaining amount of fat intake (less than 7% of total calories) can come from saturated fats.[1,4]
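Since fat supplies roughly 9 kcal per gram, these percentage targets translate into gram budgets once a daily calorie level is fixed. A sketch assuming an illustrative 2,000 kcal/day intake (the calorie level is an assumption for the example, not part of the guidelines):

```python
# Convert ATP III fat-percentage targets into grams per day
# for a given caloric intake (fat provides ~9 kcal/g).
KCAL_PER_G_FAT = 9

def fat_grams(total_kcal: float, pct_of_calories: float) -> float:
    return total_kcal * pct_of_calories / 100 / KCAL_PER_G_FAT

kcal = 2000  # assumed daily intake, for illustration only
print(round(fat_grams(kcal, 35)))  # ~78 g: total-fat upper bound
print(round(fat_grams(kcal, 20)))  # ~44 g: monounsaturated target
print(round(fat_grams(kcal, 10)))  # ~22 g: polyunsaturated target
print(round(fat_grams(kcal, 7)))   # ~16 g: saturated-fat ceiling
```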
Saturated Fatty Acids
Saturated fats have been shown to be closely associated with increases in total and LDL cholesterol.[1,4] Studies show that every 1% increase in calories of dietary saturated fatty acids correlates to a 2% increase in LDL-C.[1,5,6] The opposite has also been shown to be true: every 1% decrease in calories of dietary saturated fatty acids correlates to a 2% decrease in LDL-C.[1,5-8] Saturated fatty acids are those that contain all the hydrogen atoms that the carbon atoms can hold.[4] At room temperature, saturated fatty acids are usually in a solid state and do not combine readily with oxygen.[4] The main sources of saturated fats in the typical American diet are animal products and some plants.[3,4] Examples of food sources high in saturated fatty acids are listed in Table 1.
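Because the cited relationship is linear, an expected LDL-C change can be estimated directly from a planned shift in saturated-fat calories. The sketch below is an approximation drawn from that rule, not a clinical prediction:

```python
# Each 1 percentage point of calories moved out of saturated fat
# corresponds to roughly a 2% change in LDL-C, per the cited studies.
def projected_ldl(ldl_mg_dl: float, satfat_point_change: float) -> float:
    return ldl_mg_dl * (1 + 0.02 * satfat_point_change)

# Cutting saturated fat from 14% to 7% of calories (-7 points)
# from a starting LDL-C of 160 mg/dl:
print(round(projected_ldl(160, -7)))  # ~138 mg/dl, i.e., ~14% lower
```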
Polyunsaturated and Monounsaturated Fatty Acids
To limit the intake of saturated fatty acids, the American Heart Association recommends substituting unsaturated fatty acids in their place.[3,4] By definition, unsaturated fatty acids have at least one carbon-carbon double bond in the molecule. Polyunsaturated oils are liquid at room temperature and when stored in the refrigerator.[4] They have been shown to help reduce the amount of newly formed cholesterol and can help lower blood cholesterol levels when substituted for saturated fatty acids in the diet.[1,4] Monounsaturated oils are liquid at room temperature but start to become solid when refrigerated. Monounsaturated fatty acids have also been shown to reduce blood cholesterol when consumed with a diet very low in saturated fats.[1,4] Unsaturated fatty acids are most often found in the American diet in liquid vegetable oils.[4] Examples of polyunsaturated and monounsaturated fatty acids are listed in Table 1.
Trans-Fatty Acids
Fatty acids are chains of carbon atoms; in unsaturated fatty acids, some of those carbons are connected by double bonds. Unsaturated fatty acids that occur in nature generally have hydrogen atoms attached to the same side of the double carbon bonds, in the cis position. Trans-fatty acids have hydrogen atoms attached to opposite sides of the double carbon bonds, in the trans position. Trans double bonds occur in nature and are found in meat and dairy products but can also be created artificially through the hydrogenation of vegetable and fish oils.[4]
A rise in LDL-C levels has been associated with an increased dietary intake of trans-fatty acids.[1,9-21] In addition, trans-fatty acids have been associated with a higher risk for CHD.[1,22-25] ATP III guidelines recommend keeping the intake of trans-fatty acids low and using liquid vegetable oil, soft margarine, and trans-fatty acid-free margarine in place of stick margarine, butter, and shortening.[1] Because no standard method currently exists for measuring trans-fatty acid content in food, dietary intake is difficult to estimate. The American Heart Association recommends the following to decrease the dietary intake of trans-fatty acids[4]:
Use unhydrogenated vegetable oil such as canola or olive oil.
When choosing processed foods, look for those made with unhydrogenated vegetable oil rather than from hydrogenated or saturated fats.
Substitute soft margarines (liquid or tub) for butter or margarines that are harder or in stick form. Choose margarines with liquid vegetable oil as the first ingredient and that contain no more than 2 grams of saturated fat per tablespoonful.
Avoid foods such as French fries, doughnuts, cookies, and crackers, because they contain high amounts of trans-fatty acid.
Dietary Cholesterol
The national average intake of dietary cholesterol in the United States is 256 mg per day.[1,26] Approximately one third of dietary cholesterol intake is from egg consumption.[1,27] ATP III TLC diet guidelines recommend that Americans consume an average of 200 mg or less of dietary cholesterol daily.[1] High consumption of dietary cholesterol has been associated with an increased serum cholesterol.[1,28,29] In addition, higher intake of dietary cholesterol has been associated with increases in LDL-C, which then increases the risk for CHD.[1] Decreasing the intake of dietary cholesterol will decrease LDL-C levels in most persons.[1] Examples of foods with high dietary cholesterol are listed in Table 1.
Carbohydrates, Protein, and Fiber
As stated above, total fat intake should be limited to no more than 25%-35% of total daily calories.[1-3] ATP III further recommends that 50%-60% of total calories come from carbohydrate and 15% from protein, with a daily fiber intake of 20-30 grams.[1] Carbohydrate intake should consist mostly of foods rich in complex carbohydrates such as whole grains, fruits, and vegetables rather than high-carbohydrate foods rich in simple sugars.[3] Foods that contain high amounts of complex carbohydrates are usually low in calories and contain a wide variety of vitamins and minerals.[3] Examples of foods high in carbohydrates, proteins, and fiber are listed in Table 2.
Plant Stanols/Sterols
ATP III recommends plant stanols and sterols for patients with high serum cholesterol because these plant components have been shown to reduce LDL-C levels.[1] Plant sterols are isolated from soybean and tall pine tree oils and can be esterified to unsaturated fatty acids to increase lipid solubility.[1] Plant sterols and stanols are considered to have similar efficacy.[1,30,31] Studies have shown that plant-derived stanols/sterol esters at dosages of 2-3 grams/day lower LDL-C levels by 6%-15% with a maximum LDL-lowering effect occurring with dosages of 2 grams/day.[1,31-37] Dietary consumption of plant stanols/sterols can be obtained from commercially available products containing plant sterols/stanols (i.e., margarines, juices).[1]
Weight Control
Overweight and obesity have reached epidemic proportions in the United States.[38,39] More than 20% of adults 18 years and older have a body mass index (BMI) that classifies them as obese.[40] Obesity is associated with an increased risk of hyperlipidemia, CHD, and metabolic syndrome, as well as with several other diseases such as hypertension, stroke, diabetes, osteoarthritis, sleep apnea, gout, gallbladder disease, and several types of cancers.[41]
ATP III guidelines recommend emphasizing the treatment of overweight and obesity in patients as part of their LDL-lowering therapy to decrease the patient's risk for CHD.[1] ATP III, however, states that treatment emphasis on weight reduction should be delayed until sufficient dietary measures have been introduced as the primary means of lowering LDL-C.[1] This is recommended to avoid introducing too many new lifestyle changes to patients at one time, which may lead to decreased adherence and success. A 12-week trial of dietary changes to reduce LDL-C is recommended before introducing weight-loss interventions as a means of reducing the risk for metabolic syndrome.[1]
ATP III recommendations for weight loss were taken from the National Heart, Lung, and Blood Institute's Clinical Guidelines on the Identification, Evaluation, and Treatment of Overweight and Obesity in Adults, published in 1998. These guidelines define overweight as a BMI of 25-29.9 kg/m2 and obesity as a BMI of 30 kg/m2 or more.[41]
The initial goal for weight loss therapy is to reduce body weight by approximately 10% over 6 months.[41] If the initial goal is accomplished and additional weight loss is needed, a new weight loss goal can be set. Once the patient has achieved the desired weight, a weight management program should be initiated to prevent gaining the weight back. Patients who are unsuccessful at achieving significant weight reduction should likewise be placed in weight management programs to prevent any further weight gain.[41]
Strategies for weight loss and weight maintenance should incorporate a multifactorial approach.[41] Weight loss interventions can include dietary therapy, physical activity, behavior therapy, and in selected patients, pharmacotherapy and weight loss surgery. The most successful therapy for weight loss and weight maintenance is a combination of a low-calorie diet, increased physical activity, and behavior therapy. The obesity guidelines recommend this combination therapy for at least 6 months before considering pharmacotherapy.[41]
Dietary Therapy
An individual dietary therapy plan should be designed specifically for each patient. A diet that creates a caloric deficit of 500 to 1,000 kcal per day (from the patient's current daily caloric intake) should be instituted as part of the weight loss program.[41] As previously mentioned, reducing the amount of total fat and saturated fat in the daily caloric intake is the priority because of its effectiveness in lowering LDL-C. Reducing the percentage of dietary fat alone will not promote weight loss unless a caloric deficit has occurred.[41] In addition, reducing dietary fats and carbohydrates is usually needed to create a sufficient caloric deficit to lose weight.[41]
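Using the common approximation that about 3,500 kcal corresponds to one pound (~0.45 kg) of body fat, the recommended deficit maps onto a pace of roughly one to two pounds per week. A minimal sketch under that assumption:

```python
# Expected weekly weight loss from a sustained daily caloric deficit,
# using the rough 3,500 kcal-per-pound approximation.
KCAL_PER_LB = 3500

def weekly_loss_lb(daily_deficit_kcal: float) -> float:
    return daily_deficit_kcal * 7 / KCAL_PER_LB

print(weekly_loss_lb(500))   # 1.0 lb/week
print(weekly_loss_lb(1000))  # 2.0 lb/week (~0.9 kg)
```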
Behavioral Therapy
Specific strategies for overcoming adherence barriers as well as reinforcing learned weight loss principles are important steps to success in weight loss and weight management programs. Some strategies may include keeping a personal log of both eating and exercise habits, stress management, food stimulus control, social support, and problem solving.[41] Program adherence issues are discussed in further detail below.
Physical Activity
Increasing physical activity benefits the musculoskeletal, cardiovascular, respiratory, and endocrine systems.[42] As a result, several health benefits result, including reduced risk of premature mortality, CHD, hypertension, diabetes mellitus, and colon cancer, among others.[42] In addition, regular physical activity reduces depression and anxiety, improves mood, and enhances the ability to perform daily tasks.[42,43]
Physical inactivity has been shown to be a major risk factor for CHD.[1,43,44] The prevalence of physical activity, including lifestyle activities, among adults in the United States for 2001 was reported by the Behavioral Risk Factor Surveillance System of the Centers for Disease Control and Prevention (CDC).[45] The report found that in 2001 only 45.4% of U.S. adults regularly participated in moderate-intensity physical activity (e.g., vacuuming, gardening, brisk walking, or bicycling) or vigorous-intensity physical activity (e.g., running, aerobics, or heavy yard work).[45] In addition, 26% of U.S. adults reported no leisure-time physical activities.[45]
Regular physical activity is emphasized in ATP III because of its importance in the management of metabolic syndrome.[1] ATP III guidelines recommend that physical activity be introduced to dyslipidemic patients when TLC therapy is initiated and to reinforce the concept when treatment emphasis shifts to management of metabolic syndrome.[1] Increasing regular physical activity has been shown to reduce LDL, very low-density lipoprotein cholesterol, and triglyceride levels, and increase high-density lipoprotein (HDL) cholesterol levels.[1,42,43,46] The purpose of promoting physical activity to patients with hypercholesterolemia is to promote energy balance to maintain a healthy body weight, reduce the risk for metabolic syndrome, and independently reduce the risk for CHD.[1,42,43,46]
Specific recommendations for physical activity are not outlined in ATP III; instead, health care professionals are referred to the U.S. Surgeon General's report on physical activity published in 1996.[43] The general recommendations made in the report emphasize moderate-intensity physical activity for 30 minutes per day on most, if not all, days of the week.[43] The amount of physical activity rather than the intensity of the activity should be emphasized, because sedentary people need to incorporate greater amounts of physical activity into all aspects of daily life. Accumulating physical activity over the course of a day (i.e., walking 10 minutes at a time, several times a day) is also recommended as an effective alternative to a one-time exercise session and may even lead to greater exercise adherence.[42,43] Sedentary persons with preexisting disease, such as hypercholesterolemia, should start at less-intense exercise levels and gradually build up to the desired level of activity.[46] Table 3 lists ways to enhance physical activity for patients with dyslipidemias.
An exercise program designed specifically for patients with dyslipidemias should incorporate several factors. Activities should involve large muscle groups, be rhythmic and repetitive in nature, and be sustainable over long periods of time.[42,46] Such activities could include walking or jogging, bicycling, stair climbing, dancing, basketball, volleyball, lawn mowing, and gardening. Other factors to include in the exercise program are activity intensity, duration of exercise sessions, and number of exercise sessions per day or per week.[46] To optimize outcomes and adherence, the exercises recommended must be specific to each patient's individual needs.[46] An example of an exercise program for patients with dyslipidemias is listed in Table 4.
Program Adherence
Improving patient adherence to lifestyle changes is similar to improving medication adherence. Both medication adherence and lifestyle change adherence are important and complex topics that should be addressed with patients when initiating therapy. Pharmacists and other health care professionals must understand that although therapeutic lifestyle changes clearly show many health benefits, the process of change is challenging for most patients. Dropout rates for patients participating in exercise programs have been shown to be highest during the first 3 months, with 1-year dropout rates at approximately 50%.[42] In addition, other health-related behavioral changes, such as medication adherence, smoking cessation, and weight reduction, also typically have a dropout rate of about 50%.[42]
The behavioral therapy literature offers a wealth of information specific to lifestyle change adherence. The principles discussed in the literature are based on addressing the need to overcome barriers to adherence as well as reinforcing new behaviors.[1,41] Identifying potential reasons for poor adherence in each individual patient is essential when designing a lifestyle change program. Knowing what may cause a patient to relapse to former behaviors is important information so that active steps can be taken to prevent these relapses. Common variables that may predict poor adherence include: lack of support from family, friends, and other health care providers; inclement weather; injury; lack of positive feedback or reinforcement; unrealistic goals; excessive costs; poor self-motivation; lack of knowledge about disease prevention and rehabilitation; lack of knowledge of the TLC program; and poor follow-up by health care providers.[1,41,47]
Several successful program adherence strategies were identified in the ATP III guidelines. A majority of the information comes from studies assessing weight management therapy. In general, the literature emphasizes the importance of baseline assessment of dietary therapy as well as self-monitoring techniques of both dietary and exercise habits to improve program adherence.[1] In addition, offering health information that is culturally sensitive and interactive and from reliable sources is important for improving adherence.[1] Table 5 lists key findings from behavioral studies that have demonstrated improved program adherence.
Discussion
ATP III guidelines and the 2004 ATP III update establish treatment strategies for attaining optimal LDL-C goals. The first treatment priority for patients with high blood cholesterol levels is to lower LDL-C.[1] The second treatment priority is to manage risk factors for metabolic syndrome and other lipid risk factors. Regardless of the patient's CHD risk level, ATP III guidelines and the 2004 ATP III update emphasize that all patients who are not at their target LDL-C goal must incorporate TLCs into their treatment strategies. Any person classified in the 2004 ATP III update as high risk or moderately high risk who has lifestyle-related factors (e.g., obesity, physical inactivity, elevated triglycerides, low HDL-C, or metabolic syndrome) is a candidate for TLC to modify these factors regardless of LDL-C level.[2] The 2004 ATP III update reemphasizes the importance of the initial use of dietary therapy to reduce cholesterol, with the addition of weight loss and increased exercise for patients with metabolic syndrome and other lifestyle factors, to decrease the risk for CHD.[1,2]
The concept of pharmaceutical care has been established for several years. Pharmaceutical care is defined as "a patient-centered, outcomes oriented pharmacy practice that requires the pharmacist to work in concert with the patient and the patient's other healthcare providers to promote health, to prevent disease, and to assess, monitor, initiate, and modify medication use to assure that drug therapy regimens are safe and effective."[48] Pharmacists tend to focus on the latter part of this definition rather than on disease prevention. Pharmacists are well positioned to provide TLC pharmaceutical care to patients with hyperlipidemia, for both treatment of LDL-C and prevention of CHD. TLC therapy education can be offered to patients with hyperlipidemia simultaneously with drug therapy counseling, and it can be incorporated into both community- and hospital-based pharmacy practice. Patients who obtain TLC therapy information from a variety of health care providers, including pharmacists, may be more likely to adhere to prescribed TLC programs, which may lead to a greater reduction in CHD risk.[1]
In the community setting, pharmacists have long been known to be among the most accessible health care providers. The large numbers, broad distribution, and extended hours of operation of community pharmacies give patients ready access to pharmacists.[49] Pharmacists have been shown to be more accessible than primary care physicians both in rural areas (66.4 versus 53.6 per 100,000 population) and in health professional shortage areas (37 versus 4.2 per 100,000 population).[49] As a result, community pharmacists can be an excellent resource for lifestyle therapy information for patients with hyperlipidemia, especially in rural and health professional shortage areas.
In March 2000, a study conducted by the American Pharmacists Association Foundation assessing pharmaceutical care services in patients with hyperlipidemia was published (Project ImPACT: Hyperlipidemia).[50] A total of 26 community-based ambulatory care pharmacies from various settings in 12 states participated in the 24-month study. The objective of the study was to demonstrate whether pharmacists, in collaboration with patients and physicians, could have an impact on patient persistence and adherence with antihyperlipidemic drug therapy and enable patients to reach their NCEP goals.[50]
Patient participants (n = 397) in the study were seen by the pharmacist at the initiation of the study, monthly for the first 3 months, and quarterly thereafter. Pharmacist interventions with physicians were made regarding optimizing drug therapy that focused on achieving NCEP goals. The results of the study showed that medication persistence and adherence were 93.6% and 90.1%, respectively. In addition, total cholesterol, triglyceride, HDL-C, and LDL-C levels improved significantly from baseline, and 63.5% of patients reached their NCEP goals at the end of the project. The authors of the project concluded that pharmacists, working collaboratively with patients, physicians, and other health care providers, can provide an advanced level of care resulting in successful dyslipidemia management.[50] In addition to focusing on dyslipidemia drug therapy, added emphasis from pharmacists on TLC therapy may enhance NCEP goal achievement and further reduce the risks for CHD.
Conclusion
Therapeutic lifestyle change components are at the core of treatment interventions for patients with dyslipidemia. ATP III guidelines and the 2004 ATP III update clearly state that individuals who do not meet their LDL-C goals must participate in TLC therapy regardless of drug therapy. Pharmacists are very effective managers of medications for patients with dyslipidemia. Likewise, pharmacists are in an excellent position, and may be more accessible than other health professionals, to talk with these patients about TLC therapy, given the proper education on dietary counseling, weight management, and exercise programming. Pharmacists should always work in conjunction with patients and other health care providers, such as physicians, dietitians, nurses, and exercise specialists, whenever possible to ensure the highest possible success rate for the patient.
Reprint Address
Thomas L. Lenz, PharmD, MA, Department of Pharmacy Practice, Creighton University Medical Center, 2500 California Plaza, Omaha, NE 68178. Fax: 402-280-3022. E-mail: tlenz@creighton.edu.
J Am Pharm Assoc. 2005;45(4):492-502. © 2005 American Pharmacists Association
From Medscape Rheumatology > Ask the Experts > Rheumatoid Arthritis and Related Conditions
Gout: Treatment Reaction to Allopurinol -- What Next?
Arthur Kavanaugh, MD
Posted: 11/08/2007
Question
A rash developed on a patient with gout during treatment with both allopurinol and sulfinpyrazone. Desensitization with allopurinol was not successful. The patient is intolerant of nonsteroidal anti-inflammatory drugs and colchicine. What is the next treatment option, and can anakinra be tried?
Amir Alvi, MBBS, MRCP
Response From Expert
Arthur Kavanaugh, MD
Professor, Department of Rheumatology, University of California, San Diego
This question addresses treatment options for gout. The case described highlights the unmet clinical need for new gout treatments. In this case, the best available therapy, allopurinol, cannot be tolerated because of toxicity, and even desensitization was not successful. In addition, uricosuric therapy was unsuccessful, as it is in many cases of gout. Unfortunately, other treatment options are relatively limited. To lower uric acid, removing contributing factors such as medications, where possible, can be useful. Alteration in diet can have a beneficial effect, but compliance can be difficult for many patients. Other medications with uric acid-lowering properties, such as losartan and fenofibrate, can be tried if there are no contraindications. A drug currently in late-phase development, febuxostat, is a nonpurine xanthine oxidase inhibitor that has shown promise in clinical trials but has not yet been approved for gout.
Nonsteroidal anti-inflammatory drugs (NSAIDs) are commonly used for the treatment of chronic arthritis. Comorbid conditions commonly found among patients with gout, including hypertension and renal dysfunction, which in some cases contribute to the disease, can limit the use of NSAIDs in patients with gout, however. The questioner also raises the possibility of using anakinra, an interleukin-1 (IL-1) inhibitor. There has indeed been a good deal of interest in this approach. There are data to suggest that uric acid crystals may drive inflammation, in part through the 'inflammasome,' an inflammatory intracellular pathway that acts partly through elaboration of IL-1. There is anecdotal evidence suggesting that IL-1 inhibitors may indeed be effective in gouty arthritis, and this approach is currently under study. However, the same caution must be used with anakinra as one would undertake when using it for its approved indication of rheumatoid arthritis.
From Pharmacotherapy
Urate-Lowering Therapy for Gout: Focus on Febuxostat
Bryan L. Love, Pharm.D.; Robert Barrons, Pharm.D.; Angie Veverka, Pharm.D.; K. Matthew Snider, Pharm.D.
Posted: 08/24/2010; Pharmacotherapy. 2010;30(6):594-608. © 2010 Pharmacotherapy Publications
Abstract and Introduction
Abstract
Gout is a common, painful, and often debilitating rheumatologic disorder that remains one of the few arthritic conditions that can be diagnosed with certainty and cured with appropriate therapy. Allopurinol is the most frequently prescribed agent for gout in the United States. Unfortunately, most patients treated with allopurinol do not achieve target serum uric acid (sUA) levels, possibly due to a perceived intolerability of allopurinol in doses above 300 mg and the need for reduced doses in patients with renal insufficiency. Febuxostat, an orally administered, nonpurine inhibitor of xanthine oxidase, was recently approved by the U.S. Food and Drug Administration for chronic management of hyperuricemia in patients with gout. Patients treated with febuxostat achieve rapid and substantial reductions in sUA levels. Compared with allopurinol-treated patients, patients receiving febuxostat 80 mg/day were more likely to achieve sUA concentrations less than 6 mg/dl. In long-term studies (up to 5 yrs), febuxostat demonstrated sustained reductions in sUA levels, nearly complete elimination of gout flares, and a frequency of adverse effects comparable to that of allopurinol. The most commonly reported adverse effects were liver function abnormalities, rash, nausea, and arthralgias. The recommended starting dose of febuxostat is 40 mg/day, which may be increased to 80 mg/day after 2 weeks if patients do not achieve sUA levels less than 6 mg/dl. Dosage adjustment in mild-to-moderate renal insufficiency is unnecessary; however, data are lacking on the safety of febuxostat in patients with severe renal impairment. Although more costly than allopurinol, febuxostat appears to be an acceptable alternative for the treatment of gout and hyperuricemia, and may be advantageous in patients with renal impairment, intolerance to allopurinol, or the inability to attain sUA levels less than 6 mg/dl despite adequate therapy with available agents.
Introduction
Gout is a common metabolic disorder that was once associated only with members of high society due to diets of rich food and wine. Men have a 3-fold greater risk of developing gout compared with women, although this difference decreases with age.[1] In addition, gout exhibits a strong racial disparity, with a cumulative incidence that may be twice as high in African-American men as in Caucasian men.[2] Evidence indicates that the incidence of gout increased from 45 to 63.3/100,000 patients when comparing data from 1977–1978 with data from 1995–1996.[1] Similarly, the prevalence of gouty arthritis increased from 2.9 to 5.2 cases/1000 patients from 1990 to 1999 based on data from a managed care population.[3] More recently, gout was noted as the primary diagnosis in 3% of 36.5 million clinic visits for arthritis and other rheumatic conditions.[4] Presumably these increases are due at least in part to increasing rates of obesity and the use of thiazide diuretics as a first-line treatment for hypertension.[5]
Febuxostat, a nonpurine inhibitor of xanthine oxidase, was recently approved by the United States Food and Drug Administration (FDA) for chronic management of hyperuricemia in patients with gout. Early in vitro studies demonstrated potent and selective inhibition of xanthine oxidase and xanthine dehydrogenase. In subsequent clinical studies this translated to substantial reductions in serum uric acid levels and improvements in the overall management of gout.
Pathophysiology of Gout
Uric acid, the end product of purine metabolism, is considered a waste product and serves no known physiologic purpose in humans. Unlike other mammals, humans lack the enzyme urate oxidase (uricase), which further degrades uric acid to allantoin, a substance that is more water soluble and more readily excreted by the kidneys.[6] Excessive uric acid accumulation contributes to hyperuricemia, defined as an elevated serum uric acid (sUA) level above 7 mg/dl in men and above 6 mg/dl in women, and is considered the principal cause of gouty arthritis. The term gout or gouty arthritis describes the acute and chronic clinical manifestations caused by the deposition of monosodium urate crystals in articular and extraarticular tissues.
The level of hyperuricemia determines the formation of monosodium urate crystals and the development of gouty complications. Normal sUA levels in men (≤ 7 mg/dl) and women (≤ 6 mg/dl) are already close to the limits of urate solubility (6.8 mg/dl at 37°C).[6] The more sUA levels exceed the plasma saturation point, the greater the likelihood of acute gouty arthritic attacks.[7] Serum uric acid levels in excess of 10 mg/dl promote ubiquitous formation of tophaceous deposits that may present 10 years from the initial gouty attack.[7, 8] Not all patients with hyperuricemia will develop acute gout flares or chronic gouty complications.[9] Conversely, acute flares of gouty arthritis can occur in the presence of normal uric acid levels.[10] The characteristic clinical presentation of an acute gouty arthritis flare includes severe localized pain, joint inflammation, warmth, and redness.[11] However, the clinical presentation may differ in elderly patients and may be confused with rheumatoid arthritis due to more diffuse symptoms at presentation.[12, 13] According to the American College of Rheumatology preliminary criteria for the diagnosis of primary gout,[14] patients must have monosodium urate crystals in the synovial fluid and/or tophi confirmed with crystal examination. In addition, patients must have at least six of the following findings (a simple counting sketch follows the list):
Asymmetric swelling within a joint on a radiograph
First metatarsophalangeal joint is tender or swollen
Hyperuricemia
Maximal inflammation developed within 1 day
Monoarthritis attack
More than one acute arthritis attack
Redness observed over joints
Subcortical cysts without erosions on a radiograph
Suspected tophi
Synovial fluid culture negative for organisms during an acute attack
Unilateral first metatarsophalangeal joint attack
Unilateral tarsal joint attack
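As a rough illustration only, the following Python sketch encodes the counting rule just described; the finding labels and function name are hypothetical shorthand, not part of the ACR publication.

# Hypothetical sketch: the criteria as summarized above require crystal or
# tophus confirmation plus at least 6 of the 12 listed clinical findings.
ACR_FINDINGS = {
    "asymmetric swelling on radiograph",
    "first MTP joint tender or swollen",
    "hyperuricemia",
    "maximal inflammation within 1 day",
    "monoarthritis attack",
    "more than one acute arthritis attack",
    "redness over joints",
    "subcortical cysts without erosions",
    "suspected tophi",
    "culture-negative synovial fluid during attack",
    "unilateral first MTP joint attack",
    "unilateral tarsal joint attack",
}

def meets_preliminary_criteria(crystals_or_tophi_confirmed, findings_present):
    # Count only findings that belong to the published list.
    n_findings = len(set(findings_present) & ACR_FINDINGS)
    return crystals_or_tophi_confirmed and n_findings >= 6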
There are many known causes of gout, although they can generally be classified by overproduction or inadequate excretion of uric acid. Increased dietary intake of purines directly leads to elevations in sUA levels; conversely, studies in healthy men have demonstrated that purine-free diets can reduce sUA levels significantly in a matter of days.[15] In the United States, higher levels of meat and seafood consumption are directly associated with higher urate levels.[5] However, increased consumption of purine-rich vegetables (e.g., beans, lentils, peas, spinach) is not directly associated with an increased risk of gout.[16] Although a link between alcohol consumption and gout has been suspected since ancient times, there has been little prospective evidence of a direct connection. A more recent examination of the issue confirms a strong relationship between the amount of alcohol consumed and the presence of gout. A linear increase in the relative risk of gout was seen in patients drinking more than 15 g/day, with the highest risk in patients drinking more than 50 g/day. Of interest, the relationship was strongest for beer and spirits, whereas wine was not associated with an increased occurrence of gout.[17] Drugs, particularly thiazide diuretics, are frequently associated with gout.[11] When gout is associated with diuretics, discontinuation of the diuretic is recommended, with a switch to an alternative regimen when possible.[18] Causes of gout, including drugs commonly associated with gout, are summarized in Table 1.[11, 19]
Whereas the pain of acute gout flares can be considerable, the burden of hyperuricemia is not limited to just gouty arthritis. Recent research has identified hyperuricemia as an independent risk factor for metabolic syndrome, coronary artery disease, and chronic kidney disease.[20–24] Whether hyperuricemia is a modifiable risk factor for these conditions is debated but may be answered by ongoing research.
Treatment Strategies for Gout
Management of gout initially involves treatment of an acute gouty arthritis flare. The choice of therapy for acute gout includes colchicine, nonsteroidal anti-inflammatory drugs (NSAIDs), or a corticosteroid agent. Therapeutic selection should be based on the evidence of drug efficacy, anticipated tolerability, and known contraindications.
Barring contraindications, early administration of maximum tolerable doses of NSAIDs remains the treatment of choice.[18] This recommendation is based on numerous studies, both placebo controlled and comparative, documenting safe and successful use of NSAIDs for the treatment of gouty flares.[25] In recent trials, oral corticosteroids have demonstrated similar efficacy and tolerability to NSAIDs in acute gouty arthritis flares,[26, 27] although only one study was designed to demonstrate noninferiority.[26] Rapid corticosteroid withdrawal has been associated with rebound gout attacks.[25] To prevent this occurrence, oral corticosteroids should be tapered for 1–2 weeks after resolution of gouty arthritic symptoms.[25] Based on a single randomized, placebo-controlled trial, oral colchicine provided relief of pain associated with acute gout if started within 48 hours of onset (number needed to treat = 3, i.e., roughly one additional patient obtained relief for every three treated).[18] However, 50–80% of patients receiving colchicine may experience gastrointestinal adverse effects before relief of symptoms.[28]
Colchicine and NSAIDs may be contraindicated in patients with severe hepatic or renal disease, whereas oral corticosteroids and NSAIDs should be avoided in patients with acute heart failure or acute gastrointestinal bleeding. In patients receiving either NSAIDs or corticosteroids who are at increased risk of bleeding from peptic ulcers, gastroprotective measures should be followed.[19] In patients with known contraindications to NSAIDs, colchicine, or oral corticosteroids, local intraarticular administration of corticosteroids may benefit those with severe monoarticular gouty attacks.[18, 25, 28]
Management of gout also involves lowering serum urate levels to prevent acute gouty arthritis and tophaceous deposits, and to treat preexisting tophi and their complications. These outcomes can be improved in patients who reduce their sUA levels to 6 mg/dl or lower.[7] However, acute gouty arthritis occurs infrequently, with long durations between attacks. Subsequent gouty arthritic episodes are estimated to occur in 62% of patients after 1 year, 16% of patients in 1–2 years, 11% of patients after 2–5 years, and 6% of patients after 5–10 years.[25] Similarly, the development of tophi is uncommon and may occur only after many years of hyperuricemia.[8] Consequently, urate-lowering therapy may not be warranted in all patients. Initiation of urate-lowering therapy is most appropriate in patients with severe established gout with tophi, radiographic evidence of joint damage, and urate nephrolithiasis.[18] In addition, urate-lowering therapy has demonstrated a cost savings in patients who experience gouty attacks at least twice/year.[29] Otherwise, the decision to start urate-lowering therapy requires an individual patient risk-benefit assessment that includes the role of diet, drug therapy, weight management, and alcohol consumption in promoting hyperuricemia.[18]
Whenever urate-lowering therapy is started, acute reductions in sUA concentrations can destabilize synovial microtophi, potentially promoting a recurrent gouty flare. Consequently, urate-lowering therapy should be started 6–8 weeks after an acute gout attack.[25] In addition, prophylaxis with oral colchicine 0.5–1 mg/day should be given with any hypouricemic therapy for at least 6 months.[30] During this time, colchicine may reduce subclinical inflammation associated with remnant urate crystals in the synovium. Once serum urate concentrations are within the normal range, and the patient has been without a gouty flare for 3–6 months, prophylaxis with colchicine may be discontinued.[30, 31] Evidence for prophylaxis of acute gouty flares with NSAIDs is limited, and NSAIDs are recommended only after colchicine failure or intolerance.[18, 19]
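Read as a protocol, the timing rules above reduce to a few simple checks. The sketch below is a simplification with hypothetical function and variable names, assuming the 6-week delay, 6-month prophylaxis floor, and 3-month flare-free window described in the text; it is illustrative, not a clinical algorithm.

def may_start_urate_lowering(weeks_since_acute_attack):
    # Urate-lowering therapy is delayed 6-8 weeks after an acute attack
    # to avoid destabilizing synovial microtophi.
    return weeks_since_acute_attack >= 6

def may_stop_colchicine_prophylaxis(months_on_prophylaxis,
                                    serum_urate_normal,
                                    months_without_flare):
    # Prophylaxis runs at least 6 months, and stops only once serum urate
    # is within the normal range and the patient has been flare-free for
    # 3-6 months (the conservative 3-month bound is used here).
    return (months_on_prophylaxis >= 6
            and serum_urate_normal
            and months_without_flare >= 3)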
Urate-lowering therapy can be categorized as uricosuric agents, which increase renal excretion of urate; uricostatic agents, which decrease urate formation; and uricolytic agents, which promote urate degradation (Figure 1). The choice of hypouricemic agent depends on patient comorbidities, drug tolerance, cost, and cause of the hyperuricemia.[25]
Figure 1.
The pathophysiology of gout. The three categories of urate-lowering drugs act at different steps of the urate formation and degradation pathway. Uricosuric agents increase renal excretion of urate, uricostatic agents decrease urate formation, and uricolytic agents promote urate degradation. GI = gastrointestinal.
Uricosuric drugs directly treat the principal cause of hyperuricemia—renal urate underexcretion—in 90% of patients.[31] Ironically, the mechanism of uricosuria may elicit drug-induced urate nephrolithiasis or nephropathy. These conditions are minimized with gradual dose increases, high urine output (> 1500 ml/day), urine alkalinization, and avoidance of uricosurics in patients with a history of gouty renal complications.[25, 32] Three uricosuric agents (probenecid, sulfinpyrazone, and benzobromarone) are marketed in Europe, but only probenecid is available in the United States. Both probenecid and sulfinpyrazone are effective in reducing the frequency of gouty flares; however, both agents demonstrated slightly less sUA-lowering ability than allopurinol.[18] In the United States, less than 5% of patients with gout receive probenecid.[33] Probenecid has a tolerable adverse-effect profile but displays a loss of hypouricemic efficacy in patients with a creatinine clearance (Clcr) less than 60 ml/minute.[32] Sulfinpyrazone can inhibit platelet function, increasing the risk of gastric bleeding, and has rarely been associated with bone marrow suppression.[32] The most potent of the uricosuric agents, benzobromarone, retains efficacy in patients with a Clcr above 25 ml/minute. However, concerns of potentially fatal hepatotoxicity in patients receiving high-dose benzobromarone have restricted its use.[32]
Uricolytic agents provide the enzyme uricase to promote formation of allantoin and lower urate levels. A recombinant form of uricase, rasburicase, is indicated for management of anticipated hyperuricemia in pediatric patients with leukemia, lymphomas, or solid organ tumors who are likely to experience tumor lysis syndrome.[34] Among the most potent urate-lowering therapies, rasburicase has been shown to lower sUA rapidly and profoundly, as well as to debulk tophi.[32] However, uricase agents, like rasburicase, must be administered intravenously, and their duration of activity requires biweekly or monthly injections. Repeated injections of uricase agents have resulted in formation of antiurate oxidase antibodies in 7–14% of patients.[25] With continued therapy, these antibodies may lead to diminished efficacy. In addition, long-term assessment of the risks and benefits of uricase agents is limited.[35] Given their antigenicity, intravenous administration, cost, and insufficient long-term study, the role of uricase agents in the treatment of chronic gout is unclear.
Allopurinol, a uricostatic agent, is the most commonly prescribed drug for the management of gout.[33] Acceptance of allopurinol can be attributed to convenient once-daily dosing (at doses ≤ 300 mg/day), low cost, and its safety and efficacy regardless of hyperuricemic origin. In addition, allopurinol possesses a predictable sUA-lowering effect of 1 mg/dl for every 100-mg incremental dose.[18] This allows clinicians to titrate allopurinol to therapeutic goals with doses ranging from 100–800 mg/day. Unfortunately, in the United States, 95% of patients with gout receive doses of allopurinol that do not exceed 300 mg/day,[33] despite less than 50% of patients achieving sUA levels of less than 6 mg/dl.[36] Physician and patient reluctance to titrate allopurinol doses to goal sUA concentrations stems from several concerns. First, treatment-induced gout flares secondary to the urate-lowering effects of allopurinol may result in noncompliance and therapy failure. In such patients, delayed administration after an acute gout attack and prophylaxis with colchicine or NSAIDs can prevent these flares.[19] Second, although mild, reversible adverse effects (e.g., gastrointestinal intolerance, skin rash) occur in up to 20% of patients, 5% are unable to tolerate therapy.[32] Clinicians are also apprehensive of potentially life-threatening allopurinol hypersensitivity syndrome, a rare adverse event that afflicts 0.4% of patients.[37] Apparently dose dependent, this syndrome manifests with fever, eosinophilia, dermatitis, hepatic dysfunction, and renal failure, and the associated mortality rate is approximately 20%.[6] Allopurinol hypersensitivity syndrome and severe cutaneous reactions may be linked to accumulation of a metabolite, oxypurinol, which occurs more readily in patients with renal insufficiency.[25] Guidelines for dosing allopurinol in patients with renal impairment may further contribute to insufficient control of hyperuricemia and gout.[36]
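Because the text gives a rule of thumb of roughly 1 mg/dl of sUA lowering per 100-mg increment over a 100–800-mg/day range, a back-of-the-envelope titration estimate can be sketched as follows; the function is hypothetical and purely illustrative, since real titration is guided by measured sUA, renal function, and tolerability.

import math

def estimate_allopurinol_dose(current_sua, target_sua=6.0):
    # Apply the ~1 mg/dl per 100 mg heuristic, rounding up to the next
    # 100-mg increment and clamping to the 100-800 mg/day range.
    reduction_needed = max(0.0, current_sua - target_sua)
    dose = math.ceil(reduction_needed) * 100
    return min(max(dose, 100), 800)

# Example: a baseline sUA of 9.8 mg/dl suggests roughly 400 mg/day,
# above the 300-mg/day ceiling most patients actually receive.
print(estimate_allopurinol_dose(9.8))  # 400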
In patients receiving allopurinol, failure to achieve goal sUA concentrations, whether from actual or perceived intolerance, provides an opportunity for alternative urate-lowering therapy. In the United States, the only available uricosuric agent, probenecid, has reduced efficacy in patients with any level of renal insufficiency. Uricolytic agents are burdened with intravenous administration, high cost, and inadequate long-term study for chronic gout. An ideal urate-lowering drug should possess predictable sUA-lowering effects regardless of hyperuricemia etiology, prove safe and effective in patients with renal insufficiency, and provide good tolerability and once-daily oral administration conducive to management of chronic gout.
Febuxostat
Pharmacology
Febuxostat lowers sUA concentrations by interfering with purine catabolism, specifically the oxidation of hypoxanthine to xanthine and xanthine to uric acid at a molybdenum-pterin center. Although febuxostat is an inhibitor of xanthine oxidase, it is structurally quite different from allopurinol (Figure 2), has an alternate mechanism of enzyme inhibition, and is more potent.[38] Unlike allopurinol, which undergoes oxidation to the active metabolite oxypurinol and interacts chemically with the molybdenum center of xanthine oxidase, febuxostat remains unchanged and inhibits xanthine oxidase by binding in a narrow channel leading to the molybdenum center of the enzyme.[38] By this mechanism, febuxostat is able to inhibit both the reduced and oxidized form of xanthine oxidase to produce sustained reductions in sUA levels.[38] In terms of potency, febuxostat has an in vitro inhibition constant (Ki) of less than 1 nM, which is comparable to oxypurinol (Ki = 0.5 nM). However, the ability of febuxostat to inhibit both forms of the xanthine oxidase enzyme is an advantage since oxypurinol binds only weakly to the oxidized form and can be displaced during reoxidation of the molybdenum cofactor. This reaction can occur within hours and may only be overcome with multiple daily doses of allopurinol.[38, 39] In addition, because febuxostat is structurally unrelated to purine or pyrimidines, it does not interfere with other enzymes in these metabolic pathways but is selective for xanthine oxidase.[39]
Figure 2.
Chemical structures of febuxostat and allopurinol.
Pharmacokinetics and Pharmacodynamics
When administered orally, febuxostat is absorbed rapidly, reaching maximum concentrations in plasma (Cmax) within 1 hour.[40, 41] The absolute bioavailability of febuxostat is unknown, but absorption approximates 84%.[42, 43] Febuxostat is highly protein bound (about 99%, to albumin), has a moderate volume of distribution of 0.7 L/kg, and has a half-life of 5–8 hours.[41, 43–45] The primary method of clearance is hepatic. Approximately 22–44% of a dose undergoes conjugation by uridine diphosphate-glucuronyl transferase (UDPGT) enzymes to produce the acyl-glucuronide metabolite. Up to 8% of a given dose undergoes oxidation by cytochrome P450 (CYP) 1A2, 2C8, and 2C9 to produce the active metabolites 67M-1, 67M-2, and 67M-4. Only 1–6% of the drug is excreted unchanged in the urine.[40, 41, 43, 46, 47]
Febuxostat displays linear pharmacokinetics as evidenced by proportional increases in Cmax across a dose range of 10–120 mg, and area under the plasma concentration–time curve (AUC) for doses ranging from 10–240 mg. Dose-escalation studies in healthy volunteers demonstrated a 25–76% reduction in sUA levels with incremental dose increases of febuxostat up to 120 mg, at which point the pharmacodynamic effects plateaued.[40, 41] In patients with gout and/or hyperuricemia, febuxostat 20 mg once/day produced a sustained hypouricemic effect, decreasing the mean AUC of sUA levels from 8.7 mg/100 ml to 5.8 mg/100 ml after 4 weeks of therapy, with maximum and minimum sUA levels in the 24 hours after final dose administration differing by less than 1 mg/100 ml.[45]
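Dose proportionality means exposure scales in direct proportion to dose. As a hypothetical illustration of the arithmetic (the function name and example numbers are invented, not taken from the studies):

def scale_cmax(cmax_at_ref_dose, ref_dose_mg, new_dose_mg):
    # Under linear pharmacokinetics, Cmax scales proportionally with dose
    # within the proportional range (10-120 mg per the text above).
    return cmax_at_ref_dose * (new_dose_mg / ref_dose_mg)

# e.g., whatever Cmax a 40-mg dose produces, an 80-mg dose would be
# expected to produce roughly twice that value.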
Special Populations The pharmacokinetic and pharmacodynamic effects of febuxostat have been evaluated across sexes, in elderly patients, and in patients with renal or hepatic dysfunction. One group of researchers evaluated the pharmacokinetic parameters and sUA-lowering effect of febuxostat 80 mg once/day for 7 days in men compared with women and in patients aged 18–40 years compared with those aged 65 years or older.[48] Compared with males, females had a higher Cmax and AUC of unbound drug (31.5 vs 23.6 ng/ml, p<0.01, and 62.8 vs 53.9 ng•hr/ml, p<0.05, respectively) as well as greater percent decreases in mean sUA concentrations (59% vs 52%, p<0.01). This difference was not considered to be clinically significant, however, and was most likely related to weight differences between the sexes. Patients aged 65 years or older had comparable Cmax, AUC, and percent decreases in sUA levels compared with the younger patients. Based on these data, dosage adjustments are not required based on age or sex.
Two studies in patients with renal impairment confirmed that dosage adjustments of febuxostat are unnecessary with mild-to-moderate renal dysfunction.[46, 47] The effect of renal function on a single dose of febuxostat 20 mg was assessed.[46] The mean AUC of unchanged febuxostat was similar in patients with normal renal function (Clcr ≥ 80 ml/min) or mild renal impairment (50 ≤ Clcr < 80 ml/min). Although patients with moderate renal impairment (30 ≤ Clcr < 50 ml/min) had a higher AUC of unchanged drug, the difference was less than 2-fold and was not considered clinically significant. Despite mean changes in plasma uric acid concentrations being higher in the renally impaired groups, these changes did not differ significantly from those in patients with normal renal function.
Similar results were reported when febuxostat 80 mg/day administered for 7 days was evaluated in patients with normal renal function (Clcr > 80 ml/min) and in those with mild (Clcr 50–80 ml/min), moderate (Clcr 30–49 ml/min), or severe (Clcr 10–29 ml/min) renal dysfunction.[47] On day 7, there was no statistically significant difference in Cmax among the four groups, but the unbound AUC and half-life of febuxostat and its metabolites (67M-1, 67M-2, and 67M-4) increased significantly and linearly as Clcr decreased. Again, this did not translate into significant increases in percent reductions of plasma or urinary uric acid concentrations among groups. Based on these pharmacokinetic data, the authors concluded that more conjugated drug underwent enterohepatic recycling and excretion through the biliary route in the patients with renal dysfunction. At this time, dosage adjustments of febuxostat in patients with renal impairment are not recommended; however, the two studies had only a combined total of 47 patients, and febuxostat has not been studied in patients with end-stage renal disease.
The effect of hepatic impairment on febuxostat was evaluated in one study.[44] Febuxostat 80 mg once/day was administered for 7 days to patients with normal hepatic function and those with mild (Child-Pugh class A) or moderate (Child-Pugh class B) hepatic dysfunction. Patients with mild or moderate hepatic impairment had a higher unbound Cmax, unbound AUC, and half-life than patients with normal hepatic function, but the differences were not statistically significant. Although percent decreases in sUA levels were significantly lower in patients with hepatic dysfunction (48.9% for mild and 47.8% for moderate dysfunction) compared with those with normal hepatic function (62.5%, p<0.005 for both comparisons), this 14–15% difference was not considered to be clinically significant since the absolute reductions observed in patients with hepatic impairment were still comparable to reductions observed in previous studies of healthy subjects. Based on these data, febuxostat does not appear to require dosage adjustment in patients with mild-to-moderate hepatic impairment (Child-Pugh classes A and B). This is most likely due to compensatory changes in renal excretion of unchanged drug and glucuronide conjugates as oxidative metabolism and biliary excretion decrease. To our knowledge, febuxostat has not been studied in patients with severe hepatic impairment.
Clinical Efficacy
Short-term Studies In a multicenter, phase II, randomized, double-blind, placebo-controlled, dose-response clinical trial, the safety and efficacy of febuxostat once/day were examined.[49] A total of 153 patients (aged 23–80 yrs) were randomized to febuxostat 40, 80, or 120 mg/day or placebo. Of these patients, 13 were excluded because baseline sUA levels were measured outside the desired time frame; thus, 140 patients were included in the intent-to-treat efficacy analysis. All of the original 153 randomized patients were included in the analyses of treatment safety and gout flares because all received at least one dose of the study drug. There were no significant differences in baseline characteristics; specifically, mean sUA levels were similar between groups and ranged from 9.24–9.92 mg/dl. During a 2-week washout period and throughout the first 2 weeks of the study, prophylaxis with colchicine 0.6 mg twice/day was provided. If participants experienced acute gout flares after the initial 2 weeks, further treatment was determined by the investigators.
The primary outcome measure was the proportion of patients in each treatment group with sUA levels less than 6 mg/dl on day 28. Secondary measures included the proportion of patients with sUA levels less than 6 mg/dl on days 7, 14, and 21, the percent reduction in sUA level from baseline at each visit, and the percent reduction in daily urinary uric acid excretion from baseline to day 28.
Treatment with febuxostat resulted in prompt and persistent lowering of sUA levels. A significantly greater proportion of patients in the febuxostat groups achieved sUA levels less than 6 mg/dl at each visit compared with those in the placebo group (p<0.001 for each comparison). The majority of patients in each febuxostat group reached and maintained the targeted sUA concentration (< 6 mg/dl), some as early as day 7. The efficacy of febuxostat in lowering sUA was further demonstrated by a significantly greater proportion of febuxostat-treated patients achieving a sUA level of less than 4 mg/dl or less than 5.0 mg/dl compared with those receiving placebo on day 28 (p<0.05 for each comparison). It was later determined that participants with the highest baseline sUA levels were less likely to reach a sUA level of less than 6 mg/dl with febuxostat 40 mg/day compared with either 80 or 120 mg/day (p value not reported).
Febuxostat has been compared with allopurinol in three phase III studies, two published and one unpublished. The Febuxostat versus Allopurinol Controlled Trial (FACT)[50] and the Allopurinol- and Placebo-Controlled Efficacy Study of Febuxostat (APEX)[51] were conducted over 52 weeks and 28 weeks, respectively. In FACT, 760 patients from 112 centers in the United States and Canada were enrolled and received treatment,[50] whereas APEX enrolled 1072 patients in the United States only.[51] The Confirmation of Febuxostat in Reducing and Maintaining Serum Urate (CONFIRMS) trial, which is not yet published, enrolled 2268 patients and was conducted primarily to address cardiovascular safety concerns identified in FACT and APEX.[52] Eligible patients for all three studies met the criteria for acute gouty arthritis as determined by the American College of Rheumatology[14] and were adults with sUA levels of 8 mg/dl or higher. One major difference among the studies was the inclusion of patients with renal insufficiency. Patients in FACT were excluded for elevated serum creatinine concentrations of 1.5 mg/dl or greater, or estimated creatinine clearances less than 50 ml/minute; however, the other studies included patients with moderate renal impairment, defined as a serum creatinine concentration of 1.5–2.0 mg/dl (APEX) or an estimated Clcr of 30–59 ml/min (CONFIRMS).[50–52]
After a 2-week washout period, patients in APEX or FACT were randomly assigned to receive febuxostat 80 or 120 mg/day or allopurinol 300 mg/day. APEX also included placebo and febuxostat 240-mg/day groups, and patients with moderate renal impairment received a reduced dose of allopurinol 100 mg/day. Prophylaxis against acute gout flares with naproxen 250 mg twice/day or colchicine 0.6 mg once/day was administered to all patients during the washout period and the first 2 months of study treatment. Gout flares during the study were treated at the discretion of individual investigators.
The primary efficacy outcome measure for both trials was the proportion of patients who achieved and maintained sUA levels less than 6 mg/dl during the last three monthly measurements. Secondary outcomes included the proportion of patients at each visit with sUA levels less than 6 mg/dl, and percent reduction from baseline in sUA concentrations. In addition, the reduction in tophi size and number, and the proportion of patients requiring treatment for acute gout flares during weeks 9 through study completion were assessed.
Baseline characteristics of the treatment groups were very similar with no significant differences reported between groups in FACT or APEX. For both trials, 92–97% of patients were male, 75–81% were Caucasian, and the mean age was 51–55 years. Patients had gout for an average of 10–12 years and had an average sUA level of 9.8 mg/dl.
In FACT, more patients receiving febuxostat achieved sUA levels less than 6 mg/dl than those receiving allopurinol (p<0.001).[50] The difference in the percentages of patients reaching the target sUA level persisted in favor of febuxostat regardless of the initial sUA level (p<0.001). Patients receiving febuxostat were also significantly more likely to have sUA levels less than 6 mg/dl by week 2 compared with the allopurinol group, and these differences were maintained at all visits through week 52 (p<0.001). Considerably more patients with higher baseline sUA levels (≥ 9 mg/dl) receiving either dose of febuxostat achieved a sUA level less than 6 mg/dl at the last three visits when compared with patients receiving allopurinol (p<0.001). The mean percent reduction of sUA level from baseline was greater in both febuxostat groups than in the allopurinol group (p<0.001).
In APEX, participants receiving febuxostat 80, 120, or 240 mg/day achieved sUA levels less than 6 mg/dl at the last three visits significantly more often than those receiving allopurinol or placebo (48%, 65%, 69%, 22%, and 0%, respectively; p<0.001 for all comparisons).[51] Patients with moderate renal impairment receiving febuxostat 80, 120, and 240 mg/day achieved the goal sUA level more often than patients receiving allopurinol 100 mg or placebo (44%, 46%, 60%, 0%, and 0%, respectively; p values not reported). These data are promising for patients with renal impairment. However, as the numbers of patients with renal impairment in APEX (35 patients) and in extension studies were small, febuxostat use in this population deserves further study.
Efficacy in the CONFIRMS trial was assessed similarly to that in APEX and FACT. The primary efficacy outcome measure was the percentage of patients with a sUA level less than 6 mg/dl at the final visit (6 mo).[52] Patients were randomized to receive allopurinol 300 mg/day or febuxostat 40 or 80 mg/day; patients with moderate renal insufficiency (not defined) who were randomized to receive allopurinol were dosed at 200 mg/day. Participants were equally matched with regard to baseline demographic and clinical characteristics such as sUA level, cardiovascular disease, and renal impairment. At the conclusion of the trial, 45% and 67% of patients receiving febuxostat 40 and 80 mg/day, respectively, and 42% of allopurinol-treated patients achieved sUA levels less than 6 mg/dl. Febuxostat 40 mg/day was determined to be noninferior to allopurinol; however, a statistically significant benefit was apparent when comparing febuxostat 80 mg/day with allopurinol. Table 2 summarizes the results of these short-term clinical trials (≤ 1-yr duration) that evaluated the efficacy of febuxostat for hyperuricemia in patients with gout.[49–52]
Long-term Studies Two long-term clinical studies have been conducted in an effort to determine the durability of urate lowering, safety, and tolerability of febuxostat. Both were open-label extension studies of previously published clinical trials. The Febuxostat Open-label Clinical Trial of Urate-Lowering Efficacy and Safety (FOCUS)[53] was a 5-year extension of the phase II trial[49] (discussed previously). A total of 116 patients continued treatment with febuxostat 80 mg/day. Dosage titrations to 40 mg/day, 80 mg/day, or 120 mg/day were allowed within the first 24 weeks, but a stable dosage was maintained from this point until study completion. Colchicine 0.6 mg twice/day was provided during the first 4 weeks of this study. Similar to previous studies, the primary outcome measure was the proportion of participants who achieved and maintained a sUA level less than 6 mg/dl. The secondary outcome was the overall percent reduction in sUA level from baseline. In addition, the proportion of patients with sUA levels less than 5 or less than 4 mg/dl, the proportion requiring treatment for a gout flare, and the proportion with resolution of palpable tophi were examined.[53]
The majority of patients were Caucasian (85%) and male (91%), with a mean age of 53.3 years. Baseline characteristics from the initial 28-day, phase II study were used for this study. Palpable tophi were present in 22% of participants. The majority of patients continued to take febuxostat 80 mg/day during the study. The numbers of patients who discontinued treatment were 38, 7, 5, 6, and 2 in years 1–5, respectively. The primary reasons for discontinuation were listed as personal reasons (22 patients), adverse events (13), gout flare (8), lost to follow-up (5), protocol violation (1), and other (9). A total of 58 patients completed the study, and of these patients, 54 (93%) met the goal sUA level. The proportions of patients achieving a sUA level less than 6 mg/dl at any febuxostat dose during years 1–4 were 78%, 76%, 84%, and 90%, respectively. The mean percent reduction in sUA levels in patients receiving febuxostat for 2 years or longer was nearly 50% from baseline.
The Febuxostat/Allopurinol Comparative Extension Long-term (EXCEL) study[54] included patients who had completed either FACT or APEX, both phase III comparator trials. Initially, all patients received febuxostat 80 mg/day (351 patients); however, the protocol was modified to randomize patients in a 2:2:1 ratio to open-label febuxostat 80 mg/day (299 [for a total of 650] patients), 120 mg/day (291 patients), or allopurinol (145 patients). As in APEX, most patients received allopurinol 300 mg/day, but the dose was adjusted to 100 mg/day in eight patients who had renal impairment. Treatment regimens could be modified at the investigators' discretion within the first 6 months of treatment. Patients with a sUA level greater than 6 mg/dl after 6 months of treatment were withdrawn from the study. The primary outcome measure was the proportion of patients with a sUA level less than 6 mg/dl evaluated at each visit. Other efficacy measures included the percent reduction in sUA levels from baseline, changes in the size and number of palpable tophi, and the frequency of gout flares requiring treatment. Although statistical analysis was described in the methods section by the study authors, neither p values nor indicators of statistical significance were provided for any efficacy or safety measures.
The majority of patients in the EXCEL trial were Caucasian males with a mean age older than 50 years. At least one palpable tophus was present in 20% of patients at baseline. There were no significant differences reported among treatment groups with regard to baseline characteristics, gout history, or comorbid conditions (statistics not provided). The majority of patients (98%) had normal renal function (serum creatinine concentration ≤ 1.5 mg/dl).
Patient withdrawal from EXCEL was significant, although anticipated in a trial of this duration. Of the 1086 patients enrolled, 422 (39%) withdrew before the end of the 3-year study. Primary reasons for withdrawal included lost to follow-up (8.3%), personal reasons (7.2%), adverse effects (7.2%), and treatment failures (6.4%). Patient compliance with assigned treatment was assessed at each visit by pill counts and was 95% in all treatment groups.
Whereas dosage titration was allowed during the study, most patients assigned to the febuxostat 80-mg group (606/650) were maintained at this dose. In the 120-mg group, however, there were 291 patients at enrollment; this number increased to 388 patients at 6 months. Of the 145 patients receiving allopurinol at baseline, only 92 patients were maintained on a stable dose of allopurinol at 6 months.
After the first month of therapy, 81% and 87% of patients receiving febuxostat 80 mg and 120 mg, respectively, met the goal sUA level compared with only 46% of allopurinol-treated patients. For the duration of the study, 80% or more of febuxostat-treated patients maintained sUA levels below 6 mg/dl. The percentage of allopurinol-treated patients reaching the sUA goal was reported as 82% at 12 months. However, this only accounted for patients initially treated with allopurinol and maintained on therapy and did not account for patients switched from febuxostat to allopurinol within the first 6 months. The mean percent reduction in sUA levels at the last visit from initial treatment was 47%, 53%, and 32% for febuxostat 80 mg, 120 mg, and allopurinol, respectively.
Overall, the size and number of palpable tophi decreased in patients regardless of treatment group; however, a greater percent decrease in size and number in the febuxostat-treated patients was noted. In addition, a greater percentage of patients receiving febuxostat versus allopurinol achieved complete resolution of tophi.
Table 3 summarizes the results of these long-term, open-label clinical trials that evaluated the efficacy of febuxostat for hyperuricemia in patients with gout.[53, 54]
Safety and Tolerability
Febuxostat has been evaluated in more than 2700 patients in clinical studies ranging from 4 weeks[49] to more than 5 years.[53] Febuxostat was generally well tolerated in patients with gout and hyperuricemia; most treatment-related adverse events were mild to moderate in severity. No increased frequency of adverse events has been noted in patients with moderate renal impairment compared with patients who have normal renal function, although use in these patients has been limited.[49–51, 53, 54]
In a pooled analysis of three phase III controlled studies, the most commonly reported adverse events for febuxostat were liver function test abnormalities (5.4%), rash (1.2%), nausea (1.0%), and arthralgias (0.8%).[50–52] In the 5-year extension trial, FOCUS, 91% of participants (106/116) reported at least one adverse event.[53] Frequencies of adverse events with febuxostat 40, 80, and 120 mg/day were similar to those with allopurinol; however, in APEX, participants taking febuxostat 240 mg/day experienced a statistically significant increase in the frequency of diarrhea and dizziness.[51] No significant differences were found in the overall frequency of adverse events between febuxostat at FDA-approved doses (40 and 80 mg/day) and placebo.[49]
The primary reasons for discontinuation of febuxostat vary depending on the trial, but adverse events were generally mild to moderate in severity. The main causes for withdrawal were liver function test abnormalities, gout flares, diarrhea, and rash. In APEX, the reasons for patient withdrawal were similar in all treatment groups, except for gout flares, which were more frequent with febuxostat than with allopurinol.[51] In the placebo-controlled study, gout flares occurred in 35–55% of febuxostat-treated patients and 37% of those receiving placebo.[49] In FACT, the percentage of patients requiring treatment for gout flares peaked within the first 3 months and gradually decreased thereafter.[50] Rebound gout flares after prophylaxis discontinuation were higher with febuxostat 120 mg/day compared with febuxostat 80 mg/day or allopurinol (p<0.001 for both comparisons). During weeks 9–52, the overall rates of gout flares were similar in the febuxostat 80-mg/day, febuxostat 120-mg/day, and allopurinol 300-mg/day groups: 64%, 70%, and 64%, respectively; the frequency of gout flares gradually decreased throughout the trial to 8%, 6%, and 11%, respectively, during weeks 49–52. To prevent gout flares when initiating febuxostat, prophylactic treatment with an NSAID or colchicine is recommended.[43]
Although not apparent in individual trials, cardiovascular thromboembolic events (cardiovascular death, nonfatal myocardial infarction, and nonfatal stroke) have been observed at a higher rate with febuxostat (0.74/100 patient-yrs) than with allopurinol (0.60/100 patient-yrs) in pooled data. However, a direct causal relationship has not been established, and these differences were not statistically significant. Based on these data, the FDA required an additional study to evaluate the thromboembolic risk of febuxostat compared with allopurinol. Cardiovascular thromboembolic events in the CONFIRMS trial were few in number, both in total and in individual treatment groups. The rate of observed events was not higher with febuxostat than with allopurinol. Considering the available evidence, febuxostat possesses a reasonable risk-benefit balance, but additional long-term studies evaluating cardiovascular outcomes comparing febuxostat with allopurinol are necessary. Until the cardiovascular thromboembolic risk is fully elucidated, patients receiving febuxostat should be monitored for signs and symptoms of myocardial infarction and stroke.[43]
Transaminase level elevations greater than 3 times the upper limit of normal have been observed in clinical trials; no dose-effect relationship has been noted. The manufacturer recommends monitoring liver function at 2 and 4 months after starting febuxostat therapy, and periodically afterward.[43]
Drug Interactions
The role of febuxostat as a target and/or precipitant of pharmacologic and pharmacokinetic drug interactions was examined. Febuxostat may disrupt xanthine oxidase–dependent metabolism of theophylline, azathioprine, mercaptopurine, and didanosine. Case reports of toxicity when these drugs are administered concurrently with allopurinol have been reported. Although specific data regarding febuxostat are unavailable, similar interactions would be expected. Thus, concurrent use of febuxostat with theophylline, azathioprine, mercaptopurine, and didanosine should be avoided.[43, 55–57]
The effect of food and antacids on febuxostat and sUA concentrations was investigated in a crossover study of 92 healthy subjects.[43, 58] Food reduced the rate and extent of absorption of febuxostat; however, this was not associated with significant changes in sUA concentrations. Antacids reduced the rate, but not the extent, of febuxostat absorption. These findings suggest febuxostat can be administered with food or antacids without significant impact on response.
An in vitro study assessed the drug-drug interaction potential of febuxostat with regard to its binding characteristics to plasma albumin and its metabolism by CYP and UDPGT enzymes.[59] Febuxostat did not influence plasma albumin binding of ibuprofen or warfarin, nor did these drugs change plasma protein binding of febuxostat. Metabolism of febuxostat was widely distributed among UDPGT and CYP enzymes, which decreases the likelihood of interactions with drugs that may inhibit these enzyme systems. These findings suggest that febuxostat has a low overall drug-drug interaction potential.
Finally, interactions between febuxostat and drugs used in the treatment of acute gouty arthritis have been reviewed.[43, 60] Febuxostat did not affect the pharmacokinetics of indomethacin, naproxen, or colchicine, nor did these drugs significantly affect the pharmacokinetics of febuxostat in healthy individuals. Febuxostat may be safely administered to patients receiving colchicine, indomethacin, or naproxen for prophylaxis of gouty flares associated with urate-lowering therapy.
Dosing and Administration
The FDA has approved febuxostat for the treatment of hyperuricemia in patients with gout. The recommended starting dosage is 40 mg once/day. If patients do not achieve a sUA level below 6 mg/dl after 2 weeks with febuxostat 40 mg/day, the dose should be increased to 80 mg/day.[43] Febuxostat may be taken without regard to meals or antacids, as any observed pharmacokinetic changes (AUC and Cmax) are not considered to be clinically significant.[58]
Unlike allopurinol, febuxostat does not require dosage adjustments in patients with mild-to-moderate renal impairment (Clcr 30–89 ml/min); the starting dosage is the same as for patients with normal renal function. Febuxostat should be used with caution in patients with severe renal impairment (Clcr < 30 ml/min). Febuxostat has not been studied in patients undergoing dialysis. Febuxostat requires no dosage adjustment in patients with mild or moderate hepatic impairment (Child-Pugh class A or B). No studies, to our knowledge, have been conducted in patients with severe hepatic impairment (Child-Pugh class C); thus caution should be exercised in these patients.[43]
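A minimal sketch of the starting-dose and titration rules just described, with hypothetical function names; it mirrors only what the text states and is not a substitute for the full prescribing information.

def febuxostat_starting_dose(clcr_ml_min, child_pugh_class):
    # No adjustment for mild-to-moderate renal impairment (Clcr 30-89 ml/min)
    # or mild-to-moderate hepatic impairment (Child-Pugh A or B); severe
    # impairment and dialysis are flagged as not adequately studied.
    if clcr_ml_min < 30 or child_pugh_class == "C":
        raise ValueError("use with caution: not adequately studied")
    return 40  # mg once/day

def dose_after_two_weeks(sua_mg_dl, current_dose_mg):
    # Titrate 40 -> 80 mg/day if sUA has not fallen below 6 mg/dl.
    if current_dose_mg == 40 and sua_mg_dl >= 6:
        return 80
    return current_dose_mg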
Due to the inherent risk for precipitating an acute gouty flare when beginning febuxostat, prophylaxis is strongly recommended. The manufacturer recommends colchicine or an NSAID.[43] If a gout flare occurs during febuxostat treatment, febuxostat should not be discontinued, but the acute gout attack should be managed individually for each patient. Prophylactic therapy may be beneficial for up to 6 months.
Conclusion
Febuxostat is the first agent for chronic management of hyperuricemia in patients with gout to be approved by the FDA in several decades. In well-controlled clinical trials in patients with gout, febuxostat reduced sUA levels to a greater extent than allopurinol. These differences were apparent regardless of baseline sUA levels or renal function, and were maintained with long-term use of febuxostat. In these trials, the doses of allopurinol were 300 mg/day or less. Higher doses of allopurinol may be necessary for treatment of gout, and allopurinol may not have fared as well relative to febuxostat since a fixed-dose strategy was used. The relative safety and efficacy of a higher dose of allopurinol have not been evaluated in randomized, prospective clinical trials. However, allopurinol 300 mg/day or less is the most commonly prescribed dose in clinical practice due to concerns of intolerability.
In addition to validating the safety and efficacy of febuxostat for hyperuricemia, the studies undertaken with febuxostat also confirm the concept that achievement and maintenance of sUA levels below the limits of saturation are important measures of long-term clinical success. Although the frequency of gout flares initially increases with mobilization of monosodium urate crystals, studies consistently demonstrated reduced rates of gout flares and decreased tophi size and/or number with long-term treatment aimed at reducing sUA levels below 6 mg/dL, regardless of which urate-lowering therapy was used.
Although febuxostat was generally well tolerated, adverse effects resulted in treatment discontinuation in approximately 10% of all patients. Liver function abnormalities, nausea, arthralgias, and rash were the most frequently reported adverse effects. In addition, febuxostat is significantly more expensive than allopurinol. Thus, allopurinol will likely remain the first choice for treatment due to its favorable adverse-event profile and lower cost, as well as its long history of use. However, febuxostat remains an attractive second-line option and may be advantageous in patients with renal impairment, intolerance to allopurinol, or the inability to attain sUA levels less than 6 mg/dL despite adequate therapy with available agents.
From Medscape Medical News
Fructose Intake Associated With an Increased Risk for Gout
Emma Hitt, PhD
November 16, 2010 (Atlanta, Georgia) — Consuming sugar-sweetened sodas, orange juice, and fructose is associated with an increased risk for incident gout, according to new research findings from the Nurses' Health Study.
Hyon K. Choi, MD, DrPH, from Boston University School of Medicine, Massachusetts, presented the findings here at the American College of Rheumatology 2010 Annual Meeting. The results were also published online November 10 in the Journal of the American Medical Association.
The main message, Dr. Choi told Medscape Medical News, is that "if your patient has hyperuricemia or gout, and if they are consuming sugary beverages, particularly those containing fructose (i.e., sugar, not artificial sweeteners), then I would recommend stopping or at least reducing their intake."
Dr. Choi and colleagues analyzed data from the Nurses' Health Study, an American prospective cohort study spanning 22 years, from 1984 to 2006. Women with no history of gout at baseline (n = 78,906) provided information about their intake of beverages and fructose by filling out validated food frequency questionnaires.
Over the course of the study, 778 incident cases of gout were reported. Compared with the consumption of less than 1 serving per month of sugar-sweetened soda, the consumption of 1 serving per day was associated with a 1.74-fold increased risk for gout, and the consumption of 2 or more servings per day was associated with a 2.39-fold increased risk (P < .001 for trend).
Consumption of orange juice was associated with a 1.41-fold and 2.42-fold increased risk for 1 and 2 servings per day, respectively (P = .02 for trend).
For 1 and 2 servings of sugar-sweetened soda, the absolute risk differences were 36 and 68 cases per 100,000 person-years, respectively; for 1 and 2 servings of orange juice, the absolute risk differences were 14 and 47 cases per 100,000 person-years, respectively.
The consumption of diet soft drinks was not associated with the risk for gout (P = .27 for trend).
Compared with the lowest quintile of fructose intake, the multivariate relative risk for gout in the top quintile was 1.62 (95% confidence interval, 1.20 - 2.19; P = .004 for trend), indicating a risk difference of 28 cases per 100,000 person-years.
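As a rough consistency check on these figures, the reported relative risk and absolute risk difference for the top quintile imply a baseline incidence close to the cohort's overall rate. This back-of-the-envelope arithmetic assumes full follow-up of all 78,906 participants over 22 years, which overstates the actual person-time:

\[ \text{person-years} \approx 78{,}906 \times 22 \approx 1.74 \times 10^{6}, \qquad \text{incidence} \approx \frac{778}{1.74 \times 10^{6}} \approx 45 \text{ per } 100{,}000 \text{ person-years} \]

\[ \text{risk difference} \approx \text{incidence} \times (\text{RR} - 1) \approx 45 \times (1.62 - 1) \approx 28 \text{ per } 100{,}000 \text{ person-years} \]

which matches the reported risk difference of 28 cases per 100,000 person-years.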
According to Dr. Choi, the mechanism by which fructose contributes to the pathology of gout is well understood.
"Administration of fructose to human subjects results in a rapid increase in serum uric acid and increased purine synthesis," he explained. "In addition, this effect is more pronounced in individuals with hyperuricemia or a history of gout."
In the published paper, the authors point out that because "fructose intake is associated with increased serum insulin levels, insulin resistance, and increased adiposity, the overall negative health effect of fructose is expected to be larger in women with a history of gout, 70% of whom have metabolic syndrome."
According to independent commentator George Bray, MD, from the Pennington Biomedical Research Center in Baton Rouge, Louisiana, this is another "nail in the coffin for the overuse of fructose-containing beverages."
"In a previous report, gout in men was associated with a higher intake of fructose (either sugar or high-fructose corn syrup from beverages)," he told Medscape Medical News. "This paper extends this using the Nurses' Health Study to show that the higher intake of fructose (soft drinks and juices) is associated with an increased risk of gout in women."
Dr. Bray added that it would be a good idea to include the fructose content of foods and beverages on the label for the public's information.
The study was not commercially funded. Dr. Choi reports receiving research grants and consulting fees from Takeda Pharmaceuticals North America. Dr. Bray has disclosed no relevant financial relationships.
JAMA. Published online November 10, 2010. Abstract
ACR 2010 Annual Meeting: Abstract L5. Presented November 10, 2010
White Papules on the Ear: Discussion of Answer
Discussion of Answer
The Disease of Kings
The writings of Hippocrates first described gout in the fifth century BC. Famous sufferers of gout include Alexander the Great, Kubla Khan, Isaac Newton, Henry VIII, Louis XIV, John Wesley, Martin Luther, Sir Francis Bacon, and Benjamin Franklin. It was called "the disease of kings and the king of diseases" because it usually struck the affluent. The most characteristic lesion of gout is the tophus, a deposit of crystals of monosodium urate that triggers a severe inflammatory reaction in the surrounding tissue. Although the first metatarsophalangeal joint is the most common site of initial inflammatory disease (podagra), visible tophi can also develop on the helix of the ear, olecranon bursa, Achilles tendon, extensor surface of the forearm, prepatellar bursa, fingertips, and toes.[1-3] Less commonly, tophi develop on the tongue, vocal cords, epiglottis, and penis.
Gout can arise from metabolic deficits that cause uric acid overproduction or from renal underexcretion of urates, or it may be secondary to hyperuricemia induced by coexisting disease or medications. Primary gout more commonly affects men, with the first attack occurring typically between the ages of 40 and 50 years. It is uncommon in premenopausal women, and some speculate that estrogen expedites renal excretion of uric acid. Family history of gout is reported by 10% to 20% of patients. The majority of patients with gout have tubular defects that reduce uric acid excretion; only 10% have excessive de novo synthesis. The metabolic disorders that predispose to gout are those that alter purine metabolism, including deficiencies in glucose-6-phosphatase (as in type 1 glycogen storage disease) and hypoxanthine-guanine phosphoribosyl transferase (HGPRT; severe deficiency is seen in Lesch-Nyhan syndrome) and overactivity of phosphoribosyl pyrophosphate synthetase (X-linked).[1,2,4]
Other factors that may contribute to the development of gout include alcohol use, hypertension, obesity, insulin resistance, high-purine diet, or lead exposure. Medications that decrease urate excretion include thiazide diuretics and salicylates (both block tubular secretion of uric acid), ethambutol, and cyclosporine. Secondary hyperuricemia may be due to myeloproliferative disease or renal insufficiency. Severe psoriasis and other diseases with high tissue nucleic acid turnover may also lead to hyperuricemia. Of note, uric acid is the terminal product of purine metabolism in humans; other mammals express uricase, which metabolizes uric acid to allantoin, a significantly more water-soluble product. The only other mammal prone to uric acid kidney stones is the Dalmatian dog, which suffers from these due to increased fractional excretion of uric acid.[1,2,4]
Clinical Stages of Gout
Gout progresses in 4 clinical stages:
1. Asymptomatic hyperuricemia: Serum uric acid level > 7 mg/dL (normal ranges: men, 4-6 mg/dL; women, 3-5 mg/dL). Asymptomatic hyperuricemia does not require treatment; high levels may be detected in 5% of adults, and only a minority of those with hyperuricemia develop gout.
2. Acute gout: Hyperuricemia has caused the precipitation of uric acid crystals from supersaturated extracellular fluid. A brisk inflammatory response develops, causing abrupt onset of pain and swelling. Acute attacks of gout commonly occur at night or in the early morning hours. The local inflammation may be accompanied by fever, leukocytosis, and an elevated erythrocyte sedimentation rate, and less commonly by hemolytic anemia, calcium level abnormalities, or hypothyroidism. The crystals stimulate monocytes and macrophages, triggering growth factor and chemokine release; these signal neutrophil chemotaxis, complement activation, and phagocytosis of crystals. Ingested crystals cause lysosome rupture, and the leakage of lysosomal enzymes induces further tissue damage.
3. Interval or intercritical gout: The period between acute attacks, during which there are no symptoms and joint function is normal.
4. Chronic tophaceous gout: With prolonged hyperuricemia and acute episodes of gout, urate crystal deposits (tophi) may develop in the skin and soft tissues, synovial membranes, tendons, cartilage, and bone. In 20% to 30% of patients, uric acid renal stones will also form. Tophi usually develop 10 years or more after the onset of gout, but more rapid development is reported with myeloproliferative disorders and with deficiencies involving enzymes of purine metabolism. Tophi can resolve after 6-12 months of normouricemia.[2,4,5]
Differential Diagnosis
The differential diagnosis for acute gout includes "pseudogout" (chondrocalcinosis), osteoarthritis, psoriatic arthritis, Reiter's disease, septic joint, cellulitis, and traumatic injury. The tophi, which are yellow-white hard nodules, may mimic rheumatoid nodules, xanthomas, calcinosis cutis, or squamous-cell carcinoma.[6] The diagnosis of gout depends on identification of urate crystals in joint fluid or within tophi. These appear as negatively birefringent, needle-shaped crystals when studied with polarized light. Tophi are radiolucent, but reports suggest that they can be identified by ultrasound (by virtue of the central non-sonotransmitting substance) and by in vivo Raman spectroscopy. CT and MR imaging of subcutaneous deposits has also been reported.[7,8] Hyperuricemia is suggestive but not diagnostic in the absence of characteristic clinical findings; indeed, the serum uric acid level may be high, normal, or low during acute episodes of gout. There are reports of tophi developing without prior joint disease.[2]
Histologic Examination
Routine histologic examination of gout reveals amorphous, amphophilic material, with interspersed stellate cavities surrounded by giant cells, and lymphocytes forming chronic foreign-body granulomata. The cavities represent uric acid crystals that dissolved during standard processing in formalin or other water-based preservatives. In a clinical context suspicious for gout, tissue should be preserved in an alcohol-based fixative, such as Carnoy's fluid, or in 95% ethanol to allow retention of the urate crystals. Under polarized light microscopy, the crystals appear as bright, refractile, yellow-brown needles. A red compensator allows visualization of yellow crystals when the light is parallel to the orientation of the compensator, and of blue crystals when the light is perpendicular to it. Silver-based stains (Gomori's methenamine, De Galantha, or von Kossa) stain urate crystals dark brown. A fibrous capsule may be present around the tophus.[1,2]
From Urologic Nursing
Urolithiasis/Nephrolithiasis: What's It All About?
Joan Colella; Eileen Kochis; Bernadette Galli; Ravi Munver
Posted: 01/17/2006; Urol Nurs. 2005;25(6):427-448, 475. © 2005 Society of Urologic Nurses and Associates
Abstract and Introduction
Abstract
Urolithiasis (urinary tract calculi or stones) and nephrolithiasis (kidney calculi or stones) are well-documented common occurrences in the general population of the United States. The etiology of this disorder is multifactorial and is strongly related to dietary lifestyle habits or practices. Proper management of calculi that occur along the urinary tract includes investigation into causative factors in an effort to prevent recurrences. Urinary calculi or stones are the most common cause of acute ureteral obstruction. Approximately 1 in 1,000 adults in the United States are hospitalized annually for treatment of urinary tract stones, resulting in medical costs of approximately $2 billion per year (Ramello, Vitale, & Marangella, 2000; Tanagho & McAninch, 2004).
Introduction
The term nephrolithiasis (kidney calculi or stones) refers to the entire clinical picture of the formation and passage of crystal agglomerates called calculi or stones in the urinary tract (Wolf, 2004). Urolithiasis (urinary calculi or stones) refers to calcifications that form in the urinary system, primarily in the kidney (nephrolithiasis) or ureter (ureterolithiasis), and may also form in or migrate into the lower urinary system (bladder or urethra) (Bernier, 2005). Urinary tract stone disease has been documented historically as far back as the Egyptian mummies (Wolf, 2004).
Prevalence
As many as 10% of the U.S. population will develop a kidney stone in their lifetime. Upper urinary tract stones (kidney, upper ureter) are more common in the United States than in the rest of the world. Researchers attribute the incidence of nephrolithiasis in the United States to a dietary preference for foods high in animal protein (Billica, 2004).
Age and Gender
The literature reflects that the incidence of kidney (renal) stone formation is greater among white males than black males and three times greater in males than females. Although kidney stone disease is one-fourth to one-third more prevalent in adult white males, black males demonstrate a higher incidence of stones associated with urinary tract infections caused by urea-splitting bacteria (Munver & Preminger, 2001).
Kidney stones are most prevalent between the ages of 20 and 40, and a substantial number of patients report onset of the disease prior to the age of 20 (Munver & Preminger, 2001; Pak, 1979, 1987). The lifetime risk for kidney stone formation approaches 20% in the adult white male and approximately 5% to 10% for women. The recurrence rate for kidney stones is approximately 15% in year 1 and as high as 50% within 5 years of the initial stone (Munver & Preminger, 2001; Spirnak & Resnick, 1987).
Pathophysiology of Nephrolithiasis
Any factor that reduces urinary flow, causes obstruction resulting in urinary stasis, or reduces urine volume through dehydration and inadequate fluid intake increases the risk of developing kidney stones. Low urinary flow is the most common abnormality and the most important factor to correct in patients with kidney stones. It is important for health practitioners to concentrate on interventions for correcting low urinary volume in an effort to prevent recurrent stone disease (Munver & Preminger, 2001; Pak, Sakhaee, Crowther, & Brinkley, 1980).
Contributing Factors of Nephrolithiasis
Sex. Males tend to have a three times higher incidence of kidney stones than females. Women typically excrete more citrate and less calcium than men, which may partially explain the higher incidence of stone disease in men (National Institutes of Health [NIH], 1998-2005).
Ethnic Background. Stones are rare in Native Americans, Africans, American Blacks, and Israelis (Menon & Resnick, 2002).
Family History. Patients with a family history of stone formation may produce excess amounts of a mucoprotein in the kidney or bladder, allowing crystallites to be deposited and trapped, forming calculi or stones. Twenty-five percent of stone-formers have a family history of urolithiasis. Familial etiologies include absorptive hypercalciuria, cystinuria, renal tubular acidosis, and primary hyperoxaluria (Munver & Preminger, 2001).
Medical History. Past medical history may provide vital information about the underlying etiology of a stone's formation (see Table 1 ). A positive medical history of skeletal fracture(s) and peptic ulcer disease suggests a diagnosis of primary hyperparathyroidism. Intestinal disease, which may include chronic diarrheal states, ileal disease, or prior intestinal resection, may predispose to enteric hyperoxaluria or hypocitraturia. This may result in calcium oxalate nephrolithiasis because of dehydration and chemical imbalances (see Figure 1). Inflammatory bowel disease or intestinal surgery may prevent the normal absorption of fat from the intestines and alter the manner in which the intestines process calcium or oxalate. This may also lead to calculi or stone formation. Patients with gout may form either uric acid stones (see Figure 2) or calcium oxalate stones. Patients with a history of urinary tract infections (UTIs) may be prone to infection nephrolithiasis caused by urea-splitting bacteria (Munver & Preminger, 2001). Cystinuria is a homozygous recessive disease leading to stone formation. Renal tubular acidosis is a familial disorder that causes kidney stones in most patients who have it.
Figure 1. Calcium Oxalate Stone
Figure 2. Uric Acid Stone
Dietary Habits. Fluid restriction or dehydration may cause kidney stone formation. Dietary intake that is high in sodium, oxalate, fat, protein, sugar, unrefined carbohydrates, and ascorbic acid (vitamin C) has been linked to stone formation. Low intake of citrus fruits can result in hypocitraturia, which may increase an individual's risk for developing stones.
Environmental Factors. Fluid intake consisting of drinking water high in minerals may contribute to kidney stone development. Another contributing factor may be geographical, such as residence in tropical climates (NIH, 1998-2005). Stone formation is greater in mountainous and high-desert areas, such as those found in the United States, British Isles, Scandinavia, Mediterranean, Northern India, Pakistan, Northern Australia, Central Europe, Malayan Peninsula, and China (Menon & Resnick, 2002). Affluent societies have a higher rate of small upper tract stones, whereas large struvite (infection) stones occur more commonly in developing countries (see Figure 3). Bladder stones are more common in underserved countries and are likely related to dietary habits and malnutrition (Menon & Resnick, 2002).
Figure 3. Struvite Stone
Medications. Medications such as ephedrine, guaifenesin, thiazide, indinavir, and allopurinol may be contributory factors in the development of calculi (see Drug-Induced Nephrolithiasis).
Occupations. Individuals whose occupations limit or restrict fluid intake, or involve significant fluid loss, may be at greater risk for stone development as a result of decreased urinary volume.
Clinical Presentation
Symptoms may vary and depend on the location and size of the kidney stones or calculi within the urinary collecting system. In general, symptoms may include acute renal or ureteral colic, hematuria (microscopic or gross blood in the urine), urinary tract infection, or vague abdominal or flank pain. A thorough history and physical examination, along with selected laboratory and radiologic studies, are essential to making the correct diagnosis. Small nonobstructing stones or "silent stones" located in the calyces of the kidney are sometimes found incidentally on x-rays or may be present with asymptomatic hematuria. Such stones often pass without causing pain or discomfort.
Kidney Stone Symptoms
Stones in the kidneys can become lodged at the junction of the kidney and ureter (ureteropelvic junction), resulting in acute ureteral obstruction with severe intermittent colicky flank pain. Pain can be localized at the costovertebral angle. Hematuria may be present intermittently or persistently and it may be microscopic or gross.
Ureteral Stone Symptoms
Stones that can pass into the ureter may produce ureteral colic, which is an acute, sharp, spasm-like pain located in the flank. Hematuria may be present. Stones moving down the ureter to the pelvic brim and iliac vessels will produce spasms with intermittent, sharp, colicky pain radiating to the lateral flank and around the umbilical region.
As a stone passes through the distal ureter, near the bladder, the pain remains sharp but with a waxing and waning quality. Relief occurs when the spasm subsides, or the pain may intensify and radiate to the groin, testicles, or labia. Nausea, vomiting, diaphoresis, tachycardia, and tachypnea may be present, and patients are typically uncomfortable.
Bladder Stone Symptoms
Once a stone enters the bladder, dysuria, urgency, and frequency may be the only symptoms experienced. Immediate relief of symptoms occurs once the stone passes out of the bladder.
Kidney Stone Complications
Occasionally, stones can injure the kidneys by causing infection (resulting in fever, chills, and loss of appetite) or urinary obstruction. If a UTI accompanies the urinary obstruction, pyelonephritis or urosepsis can occur. If stones are bilateral, they can cause renal scarring and damage, resulting in acute or chronic renal failure.
Calcium Nephrolithiasis
Hypercalciuria
Eighty to eighty-five percent of calculi or stones diagnosed in the United States are idiopathic (spontaneous and without recognizable cause) or primary. These stones are composed of calcium and are due to excess calcium excretion in the urine, usually exceeding 200 mg/24-hour collection (see Table 2 ).
Absorptive Hypercalciuria. The primary abnormality in absorptive hypercalciuria is increased absorption of calcium. Absorptive hypercalciuria Type I is more severe and characterized by a high urine calcium level, with high or low dietary calcium intake. There is a normal serum level of calcium and phosphorus and normal or low serum level of parathyroid hormone.
Absorptive hypercalciuria Type II is a mild to moderate form of hypercalciuria and less severe than Type I. Type II hypercalciuria only occurs with high calcium intake. There is normal urinary calcium excretion while fasting or on a restricted calcium diet (Munver & Preminger, 2001).
Renal Hypercalciuria. Renal hypercalciuria or "renal leak mechanism" is thought to be caused by impairment in renal tubular reabsorption of calcium (Munver & Preminger, 2001; Pak, 1979). The loss of calcium in the urine leads to stimulation of the parathyroid function, causing elevated 1,25-vitamin D and increased intestinal absorption to maintain serum levels of calcium.
Primary Hyperparathyroidism. Excess parathyroid hormone (PTH) results in increased resorption of calcium from bone and accounts for less than 5% of stones. Parathyroid hyperplasia or adenomas secrete excess parathyroid hormone, causing increased intestinal absorption of calcium, increased 1,25-vitamin D3, and increased bone demineralization and calcium release from bone. Laboratory tests reveal elevated parathyroid hormone and serum calcium levels.
Less frequent causes of hypercalciuria include chronic immobilization, metastatic cancer to bone, multiple myeloma, and vitamin D intoxication. Calcium stones have been described as appearing spiculated, dotted, mulberry, or jackstone in appearance.
Hyperoxaluria
Hyperoxaluria is defined by urinary oxalate excretion in excess of 45 mg/day (Munver & Preminger, 2001). The cause of these calcium stones can be related to primary or secondary factors.
Primary Hyperoxaluria. Type I hyperoxaluria is a rare autosomal recessive disorder that begins in childhood, in which a defect of the hepatic enzyme alanine-glyoxylate aminotransferase (AGT) causes increased urinary excretion of oxalic, glycolic, and glyoxylic acids (Danpure, 1994; Menon & Mahle, 1982; Munver & Preminger, 2001). This condition is characterized by nephrocalcinosis, oxalate deposition in tissues, and renal failure resulting in death before age 20 if untreated. Diagnosis is made through percutaneous liver biopsy and evaluation of the amount and distribution of AGT in liver specimens.
Type II hyperoxaluria is a very rare deficiency of the hepatic enzyme D-glycerate dehydrogenase (glyoxylate reductase), resulting in increased urinary oxalate and glycerate excretion. This results in the development of nephrocalcinosis, tubulointerstitial nephropathy, and chronic renal failure.
Secondary Hyperoxaluria (Dietary). Approximately 80% of urinary oxalate is synthesized within the liver, and a small percentage (20%) comes from dietary intake. Overindulgence in diets rich in oxalate can contribute to hyperoxaluria through intestinal absorption of oxalate. This includes foods such as rhubarb, green leafy vegetables, spinach, cocoa, beer, and coffee or tea, as well as excess ascorbic acid (vitamin C) intake.
Enteric Hyperoxaluria. The primary site of oxalate absorption is the distal colon. Intestinal malabsorption due to various diseases, such as chronic diarrhea, short bowel syndrome, inflammatory bowel disease, and gastric or small bowel resection, can cause excess oxalate absorption. Additional secondary causes can be the result of low urinary output from intestinal fluid loss, low urinary citrate due to hypokalemia and metabolic acidosis, and low magnesium levels due to impaired intestinal magnesium absorption.
Hyperuricosuria
Hyperuricosuria (excessive urinary uric acid) accounts for 10% of calcium stones. There is a genetic predisposition, common in men, for stone development due to high uric acid levels and excess uric acid excretion, which results in hyperuricosuria. Excessive uric acid excretion can be found in primary gout, and in secondary conditions of purine overproduction including myeloproliferative disorders such as acute leukemia, glycogen storage disease, and malignancy. Hyperuricosuria is often caused by excess intake of purine in meat, fish, and poultry. This high-purine diet causes a low urinary pH. The features of hyperuricosuric calcium oxalate nephrolithiasis include elevated urinary uric acid (> 600 mg/day), a normal serum calcium level, normal urinary calcium and oxalate levels, normal fasting and calcium load response, and urinary pH typically < 5.5.
Hypocitraturia
Hypocitraturic calcium nephrolithiasis may exist as an isolated abnormality (10%) or more commonly in combination with other metabolic disorders (50%) (Menon & Mahle, 1982; Pak, 1987; Pak, 1994). Acid-base status, acidosis in particular, is the most important factor affecting the renal handling of citrate, with increased acid levels resulting in diminished endogenous citrate production. Low urinary citrate causes the urinary environment to become supersaturated with calcium salts, promoting nucleation, growth, and aggregation, resulting in stone formation.
Distal Renal Tubular Acidosis. A more common cause of hypocitraturia is distal renal tubular acidosis (RTA). Acidosis impairs urinary citrate excretion by enhancing renal tubular reabsorption of citrate as well as by reducing its synthesis (Pak, 1982). Distal RTA can be complete or incomplete. In both forms, hypercalciuria and profound hypocitraturia may be associated. In combination with alkaline urine, the patient is at risk for developing calcium oxalate or calcium phosphate stones (Munver & Preminger, 2001; Preminger, Sakhaee, Skurla, & Pak, 1985).
Chronic Diarrheal Syndrome. Chronic diarrheal syndrome causes a loss of alkali in the form of bicarbonate through the gastrointestinal tract resulting in metabolic acidosis with subsequent impairment in citrate synthesis (Munver & Preminger, 2001; Rudman et al., 1980). The decreased citrate production causes a lower urinary concentration of citrate. Patients with chronic diarrheal syndrome may have additional risk factors for stone formation such as low urine volumes and hyperoxaluria.
Thiazide-induced Hypocitraturia. Thiazide diuretics can produce hypokalemia (low potassium) leading to intracellular acidosis. This acidotic state inhibits the synthesis of citrate, resulting in hypocitraturia. The essential mechanism is the inhibition of citrate production, which is a consequence of chronic acidosis (Nicar, Peterson, & Pak, 1984).
Idiopathic Hypocitraturia. Mechanisms that account for hypocitraturia in this condition include a high animal protein diet (with an elevated acid-ash content), strenuous physical exercise (causing lactic acidosis), high sodium intake, and intestinal malabsorption of citrate.
Hypomagnesuria
Magnesium, an inhibitor of calcium nephrolithiasis, increases the solubility product of calcium oxalate and calcium phosphate. Hypomagnesuria is defined as urinary magnesium excretion < 50 mg/day. Many patients with nephrolithiasis will report a limited intake of magnesium-rich foods such as nuts and chocolate, suggesting the dietary basis of this condition.
Gouty Diathesis
Gouty diathesis (predisposition to uric acid or calcium stones) may appear in a latent or an early phase of classic gout, or it may manifest fully with gouty arthritis and hyperuricemia. Patients develop renal stones composed purely of uric acid, uric acid in combination with calcium oxalate or calcium phosphate, or stones that reveal only calcium oxalate or calcium phosphate. Some patients may form uric acid or calcium stones (Khatchadourian, Preminger, Whitson, Adams-Huet, & Pak, 1995). The invariant feature of this condition is persistently acidic urine (pH < 5.5) and no specific cause has been detected for the low urinary pH.
Non-Calcium Nephrolithiasis
Uric Acid Stones
Uric acid stones may form in the presence of gouty diathesis or in secondary causes of purine overproduction. Secondary causes of these stones can include chronic diarrheal states such as ileostomy, ulcerative colitis, and Crohn's disease. These chronic diarrheal states predispose to uric acid precipitation through acidic urinary pH (due to bicarbonate loss in stool or defects in urinary ammonium excretion) and reduced urinary volume (see Table 3 ).
Cystine Stones
Cystine stones are due to a rare, congenital condition resulting in large amounts of cystine (an amino acid) in the urine. Cystinuria causes cystine stones, requiring lifelong therapy (Urology Channel, 1998). This disorder typically presents during childhood and adolescence (Bernier, 2005). The diagnosis should be suspected for patients with an early onset of nephrolithiasis, a significant family history, or recurrent stone disease. A positive sodium-nitroprusside urine test or the presence of flat, hexagonal crystals in urinary sediment provides a presumptive diagnosis of cystine stone disease.
Struvite Stones (Infection Stones)
These stones are caused by UTIs, which affect the chemical balance of the urine, raising the pH. Urea-splitting bacteria (for example, Proteus, Klebsiella, and Pseudomonas) release chemicals into the urinary tract, neutralizing acid in the urine, enabling the bacteria to grow quickly and form struvite stones. Struvite stones are difficult to treat because the stone surrounds a nucleus of bacteria, which is protected from antibiotic therapy (Bernier, 2005). These stones are three times more common in women than men due to an increased incidence of UTIs in women. Struvite stones are most commonly found in patients with chronic infections as well as patients with anatomic or functional abnormalities of their urinary tract allowing stasis of urine and chronic bacteriuria. These abnormalities include neurogenic bladder, diverticula, and strictures (Lingeman, Siegel, & Steele, 1995).
Struvite stones are jagged (staghorns) in appearance and may be quite large at the time of initial presentation.
Other Causes of Nephrolithiasis
Low Urine Volume
Low urine output is defined as < 1 liter/day. The typical etiologies are low fluid intake and reduced urine volume. Other possible causes of low urine volume include chronic diarrheal syndromes that result in large fluid losses from the gastrointestinal tract, and fluid loss from perspiration or evaporation from lungs or exposed tissue. Stone formation may be initiated by a low urine output, providing a concentrated environment for substances such as calcium, oxalate, uric acid, and cystine to begin crystallization.
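The arithmetic behind this is straightforward: the same daily solute load in half the urine volume doubles the urinary concentration. Purely for illustration, using the 200 mg/day urinary calcium threshold cited earlier and comparing the < 1 liter/day output defined here with the 2 liters/day often targeted for prevention:

\[ \frac{200\ \text{mg/day}}{1\ \text{L/day}} = 200\ \text{mg/L} \qquad \text{versus} \qquad \frac{200\ \text{mg/day}}{2\ \text{L/day}} = 100\ \text{mg/L} \]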
No Pathological Disturbance
In approximately 35% of the stone-forming population, no identifiable risk factors for stone formation can be found (Levy, Adams-Huet, & Pak, 1995). This group includes individuals with normal serum calcium and PTH, normal fasting and calcium load response, normal urine volumes, normal pH, calcium, oxalate, uric acid, citrate, and magnesium levels in the presence of calcium nephrolithiasis.
Drug-Induced Nephrolithiasis
Ephedrine Calculi. Ephedrine and its metabolites (norephedrine, pseudoephedrine, and norpseudoephedrine) are sympathomimetic agents that have been used for the treatment of enuresis, myasthenia gravis, narcolepsy, and rhinorrhea (Powell, Hsu, Turk, & Hruska, 1998). In addition to numerous side effects, ephedrine and its derivatives have been associated with the production of urinary stones (Blau, 1998). The diagnosis of these calculi is similar to that of other radiolucent calculi. Twenty-four hour urine metabolic analyses can aid in identifying ephedrine or its respective metabolites.
Guaifenesin Calculi. Guaifenesin is a widely used expectorant that has recently been associated with nephrolithiasis. Guaifenesin calculi are radiolucent and present in patients who ingest this medication in excess. Twenty-four hour urine metabolic analysis can aid in the identification of guaifenesin or its metabolite, β-(2-methoxyphenoxy)lactic acid.
Indinavir Calculi. Indinavir sulfate (Crixivan®) is currently one of the most frequently prescribed protease inhibitors for human immunodeficiency virus, the virus that causes AIDS. The incidence of calculi in patients taking indinavir ranges from 3% to 20% (Schwartz, Schenkman, Armenakas, & Stoller, 1999). Indinavir calculi are radiolucent when pure and radiopaque when they contain calcium.
Xanthine Calculi. These stones occur due to a rare hereditary xanthine oxidase deficiency (see Figure 4). The deficiency of this enzyme results in decreased levels of serum and urinary uric acid. Acidic urine causes crystal precipitation, resulting in stone formation (Bernier, 2005). These stones are also seen with iatrogenic inhibition of xanthine oxidase in patients treated with xanthine oxidase inhibitors, such as allopurinol, for hyperuricosuria.
Figure 4. Xanthine Stone
Diagnosis of Kidney Stones
Urolithiasis can mimic other etiologies of visceral pain. It is imperative to consider causes of surgical abdomens such as appendicitis, cholecystitis, peptic ulcer, pancreatitis, ectopic pregnancy, and dissecting aneurysm in patients who present with abdominal pain. Initial assessment includes a thorough history and physical examination, basic serum and urine chemistries, and a radiologic imaging study. First-time stone formers may benefit from a more detailed laboratory evaluation to identify causal factors for stone formation. Multiple or recurrent stone-formers (metabolically active stone formers) require a more comprehensive laboratory evaluation (NIH, 1998-2005).
Metabolic Evaluation
The primary objective of a diagnostic evaluation of nephrolithiasis should be to efficiently and economically identify the particular physiological defect present in the patient to enable the selection of specific and rational therapy. The evaluation should be able to identify the metabolic disorders responsible for recurrent stone disease, including cystinuria, distal renal tubular acidosis, enteric hyperoxaluria, gouty diathesis, and primary hyperparathyroidism.
A detailed history and physical examination are imperative for both first-time stone-formers and recurrent stone-formers. Past medical history emphasis should include information about previous UTIs, diet and fluid intake, medications including vitamin intake, bowel disease, gout, renal disease, bone or parathyroid disease, and bowel surgery.
First-time stone-formers may undergo an abbreviated diagnostic evaluation such as stone analysis, urinalysis, culture and sensitivity, and a comprehensive metabolic panel, which includes serum calcium, uric acid, and phosphorus. Recurrent stone-formers, and first-time stone-formers at risk for recurrence, will benefit from an extensive diagnostic evaluation: stone analysis; urinalysis; culture and sensitivity; a comprehensive metabolic panel, including serum calcium, uric acid, and phosphorus; a parathyroid hormone level; and 24-hour urine collections (random and after being on a special diet). Patients at risk include children, middle-aged white males with a family history of stones, and patients with intestinal disease (chronic diarrheal or malabsorptive states), gout, nephrocalcinosis, osteoporosis, pathologic skeletal fractures, or urinary tract infection. Patients with stones composed of cystine, struvite, or uric acid should undergo a complete metabolic workup (Preminger, Peterson, Peters, & Pak, 1985).
Urine Assessment
Urinalysis with urine culture and sensitivity are mandatory tests. Reports may reveal microscopic or gross hematuria and pyuria with or without infection. An increase or decrease in urine pH and the presence of crystals may give clues as to whether the stone is alkaline or acidic. A cyanide nitroprusside test will screen for suspected cystinuria.
Two 24-hour urine collections should be performed evaluating calcium, sodium, phosphorus, magnesium, oxalate, uric acid, citrate, sulfate, creatinine, pH, and total volume. The first 24-hour urine should be a random specimen. The second 24-hour urine should be obtained after the patient has been on a sodium, oxalate, and calcium-restricted diet.
Serum Assessment
Complete blood count (CBC) may reveal an elevated white blood count (WBC) suggesting urinary systemic infection, or depressed red blood cell count suggesting a chronic disease state or severe ongoing hematuria. Serum electrolytes, BUN, creatinine, calcium, uric acid, and phosphorus assess current renal function, dehydration, and the metabolic risk of future stone formation. An elevation in PTH level will confirm a diagnosis of hyperparathyroidism.
Radiologic Assessment
Intravenous Pyelography (IVP). Intravenous pyelography (urography) has long been considered the primary diagnostic study of choice for identifying urinary tract calculi. The IVP provides anatomical and functional information, identifies the precise size and location of a stone, the presence and severity of the obstruction, and renal or ureteral abnormalities. For these reasons, the IVP has been among the most important diagnostic tests that may enable successful management decisions.
Computed Tomography (CT) Scan
CT scan (with and without contrast) is believed to be the best radiographic examination for acute renal colic as it creates images of the urinary tract and shows delayed penetration of intravenous contrast through the obstructed kidney. The delayed penetration of the contrast through an obstructed kidney is the hallmark of acute urinary obstruction. The CT findings indicative of acute urinary obstruction secondary to a stone include renal enlargement, hydronephrosis, ureteral dilatation, perinephric stranding, and periureteral edema (Katz, Lane, & Sommer, 1996; Smith, Verga, Dalrymple, McCarthy, & Rosenfield, 1996). Other conditions that can mimic ureteral colic can be identified, as well as anatomic abnormalities and obstruction. For many reasons, the CT scan is considered superior to an IVP in detecting both renal and ureteral calculi, and is routinely performed on most patients in whom a diagnosis of urolithiasis is suspected (Smith, Verga, Dalrymple, McCarthy, & Rosenfield, 1996).
Radionuclide Imaging
Renal scan is considered the gold standard for assessing renal function, especially in the setting of recurrent or long-standing nephrolithiasis. It is noninvasive, does not require any special preparation or bowel preparation, exposes the patient to minimal radiation, and is nearly free of allergic complications.
Plain X-Rays
Plain abdominal X-rays entailing a flat plate radiograph of kidney, ureter, and bladder (KUB) will identify renal stones that are radiopaque (Department of the Navy Bureau of Medicine and Surgery, 2004). Abdominal X-rays are helpful in documenting the number, size, and location of stones in the urinary tract and the radiopacity may provide information on the type of stones present. Plain abdominal films can be useful in identifying nephrocalcinosis, suggestive of hyperparathyroidism, primary hyperoxaluria, renal tubular acidosis, or sarcoidosis.
Renal Ultrasound
Ultrasonography can be used as a screening tool for hydronephrosis or stones within the kidney or renal pelvis. A renal ultrasound can also determine the amount of renal parenchyma present in an obstructed kidney, in addition to the presence of stones. The ultrasound can be used in combination with plain abdominal radiograph to determine hydronephrosis or ureteral dilation (Wolf, 2004). This may be helpful in assessment during pregnancy (see Table 4 ).
Medical Management
Effective kidney stone prevention is dependent on the stone type and identification of risk factors for stone formation (see Table 5 & Table 6 ). An individualized treatment plan incorporating dietary changes, supplements, and medications can be developed to help prevent the formation of new stones. Certain conservative recommendations should be made for all patients regardless of the underlying etiology of their stone disease. Patients should be instructed to increase their fluid intake in order to maintain a urine output of at least 2,000 ml/day. Patients should also limit their dietary oxalate and sodium intake, thereby decreasing the urinary excretion of oxalate and calcium. A restriction of animal proteins is encouraged for patients with "purine gluttony" and hyperuricosuria.
Hypercalciuria (General)
Besides treating underlying disease, management of hypercalciuria includes:
Low calcium diet (about 400 mg calcium/day).
Distilled water, if there is a high calcium content in the water supply.
Limit vitamin C (< 0.5 g/day).
Low sodium intake.
Thiazide diuretics.
Cellulose phosphate.
Orthophosphate.
Absorptive Hypercalciuria – Type I
Thiazides are commonly used for the management of absorptive hypercalciuria Type I as these medications stimulate calcium reabsorption in the distal nephron, preventing formation of kidney stones by reducing the amount of calcium in the urine. Thiazides force a mandatory increase in urinary volume but can cause electrolyte disorders. Side effects include decreased level of potassium, frequent urination, sexual dysfunction, and increased triglycerides.
Less-common medications used for treatment include orthophosphate, sodium cellulose phosphate, and urease inhibitors. Orthophosphate and sodium cellulose phosphate reduce the absorption of calcium from the intestines thereby reducing calcium in the urine. The urease inhibitors dissolve crystals and struvite kidney stones and prevent formation of new crystals. Side effects can include a bad taste in the mouth, diarrhea, and dyspepsia.
Neither sodium cellulose phosphate nor thiazide corrects the basic, underlying physiological defect in absorptive hypercalciuria. Sodium cellulose phosphate should be used in patients with severe absorptive hypercalciuria Type I (urinary calcium > 350 mg/day) or in those resistant to or intolerant of thiazide therapy. In patients with absorptive hypercalciuria Type I, who may be at risk for bone disease (for example, growing children and post-menopausal women), or who presently have bone loss, thiazide may be the medication of first choice. Sodium cellulose phosphate may be substituted for short-term therapy when thiazide action is decreased.
Potassium supplementation (Urocit-K®, Polycitra-K® crystals or syrup) should be added when using thiazide therapy to prevent hypokalemia and decreased urinary citrate excretion. A typical treatment program might include chlorthalidone 25 mg/day, with potassium citrate 15 to 20 mEq twice/day. Side effects include abdominal discomfort, nausea, and vomiting.
Absorptive Hypercalciuria – Type II
In absorptive hypercalciuria Type II, specific drug therapy may not be necessary since the physiologic defect is not as severe as in Type I. Many patients are reluctant to drink fluids and therefore excrete a concentrated urine. A low intake of calcium (400-600 mg/day) and a high intake of fluids (sufficient to achieve a minimum urine output of > 2 liters/day) would be acceptable treatment. Normal urine calcium excretion would be restored by dietary calcium restriction alone, and the increase in urine volume would help reduce urinary saturation of calcium oxalate.
Renal Hypercalciuria
Thiazides are indicated for the treatment of renal hypercalciuria. This diuretic can correct the renal leak of calcium by augmenting calcium reabsorption in the distal tubule and by causing extracellular volume depletion and stimulating proximal tubular reabsorption of calcium.
Hyperoxaluria
Oral administration of large amounts of calcium (0.25 g to 1.0 g four times/day) or magnesium has been recommended for controlling enteric hyperoxaluria. A high fluid intake is recommended to assure adequate urine volume in patients with enteric hyperoxaluria. Calcium citrate may theoretically have a role in the management of enteric hyperoxaluria; this treatment may lower urinary oxalate by binding oxalate in the intestinal tract, and may also raise the urinary citrate level and pH. Side effects are constipation, gas, and increased calcium leak. Cholestyramine is another agent used to treat calcium oxalate stones. Cholestyramine binds bile in the intestines, which limits the amount of oxalate absorbed from the intestines; therefore, less oxalate is excreted in the urine. Side effects include constipation, abdominal pain, gas, and heartburn.
Hyperuricosuria
Allopurinol is the drug of choice in patients with hyperuricosuric calcium oxalate nephrolithiasis (with or without hyperuricemia) because of its ability to reduce uric acid synthesis and lower urinary uric acid by inhibition of the enzyme xanthine oxidase. The usual dose is 300 mg/day; however, the dosage should be reduced in patients with renal insufficiency. Side effects are rash, diarrhea, and increased liver enzymes.
Potassium citrate represents an alternative to allopurinol in the treatment of this condition. Use of potassium citrate in hyperuricosuric calcium oxalate nephrolithiasis is warranted since citrate has an inhibitory activity with respect to calcium oxalate (and calcium phosphate) crystallization, aggregation, and agglomeration. Potassium citrate (30 to 60 mEq/day in divided doses) may reduce the urinary saturation of calcium oxalate.
Hypocitraturia
For patients with hypocitraturic calcium oxalate nephrolithiasis, treatment with potassium citrate can restore normal urinary citrate, thus lowering urinary saturation of calcium and inhibiting crystallization of calcium salts.
Distal Renal Tubular Acidosis
Potassium citrate therapy is able to correct the metabolic acidosis and hypokalemia found in patients with distal RTA. It will also restore normal urinary citrate levels, although large doses (up to 120 mEq/day) may be required for severe acidosis. Since urinary pH is generally elevated in patients with RTA, the overall rise in urinary pH is small. Citrate is a significant urinary calcium stone inhibitor that retards crystallization of calcium oxalate and calcium phosphate. Potassium citrate binds to calcium in the urine, preventing formation of crystals and raising the urinary citrate level and pH. It will effectively alkalinize the urine, which makes it useful in the treatment, dissolution, and prevention of uric acid stones. Urinary pH should be monitored periodically during citrate therapy because excessive alkalinization may occur. Side effects are loose, mucus-containing stools and minor GI complaints. Sodium citrate and citric acid are other alkalinizing agents used to prevent kidney stones by inhibiting stone formation through alkalinization.
Chronic Diarrheal States
Patients with chronic diarrhea frequently have hypocitraturia due to bicarbonate loss from the intestinal tract. Potassium citrate therapy can significantly reduce the stone formation rate in these patients. The dose of potassium citrate is dependent on the severity of hypocitraturia in these patients. The dosage ranges from 60 to 120 mEq/day in three to four divided doses.
Gouty Diathesis
The major objective in the management of gouty diathesis is to increase the urinary pH above 5.5, preferably to a level between pH 6.0 and 6.5. Potassium citrate is the drug of choice in managing patients with gouty diathesis. Potassium citrate is an adequate alkalinizing agent, capable of maintaining urinary pH at approximately 6.5 at a dose of 30 to 60 mEq per day in two divided doses.
Cystinuria
The objective for treatment of cystinuria is to reduce the urinary concentration of cystine to a level below its solubility limit (200-250 mg/liter). The initial treatment program includes a high fluid intake and oral administration of soluble alkali (potassium citrate) at a dose sufficient to maintain the urinary pH at 6.5 to 7.0. When this conservative program is ineffective, d-penicillamine or alpha-mercaptopropionylglycine (1,000 to 2,000 mg/day in divided doses) has been used. Sodium bicarbonate makes the urine less acidic, which makes uric acid or cystine kidney stone formation less likely; possible side effects include increased formation of calcium-type stones, fluid retention, and elevated blood sodium. Urinary pH should be monitored periodically during citrate therapy because excessive alkalinization may occur, which can increase the risk of calcium phosphate precipitation and stones.
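The solubility limit implies a minimum urine volume for any given daily cystine excretion. As a purely illustrative calculation (the 750 mg/day excretion figure is assumed for the arithmetic and is not taken from the article):

\[ V_{\min} = \frac{\text{daily cystine excretion}}{\text{solubility limit}} = \frac{750\ \text{mg/day}}{250\ \text{mg/L}} = 3\ \text{L of urine per day} \]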
Struvite (Infection) Lithiasis
Acetohydroxamic acid (AHA) is a urease inhibitor that retards stone formation by reducing the urinary saturation of struvite. When administered at a dose of 250 mg three times/day, it may prevent recurrence of new stones and inhibit the growth of stones in patients with chronic urea-splitting infections. In addition, in a limited number of patients, use of AHA has resulted in dissolution of existing struvite calculi. Side effects that have developed are deep venous thrombosis, headache, hemolytic anemia, and depression.
Drug-Induced Nephrolithiasis
Ephedrine Calculi. Few studies address the management of these calculi. As with other calculi, a urine output of at least two liters/day is recommended.
Guaifenesin Calculi. As with ephedrine calculi, few studies address the pharmacologic management of these calculi.
Indinavir Calculi. Initial measures in the management of these calculi should focus on hydration and analgesia as well as drug discontinuation and substitution with another protease inhibitor.
Xanthine Calculi. The medical management of xanthine calculi is limited because the solubility of these calculi is essentially invariable within physiologic pH ranges. Currently the recommendation includes a fluid intake of at least three liters/day. If significant quantities of other purines are present in the urine, then urinary alkalinization with potassium citrate to a pH in the range of 6.0 to 6.5 is indicated to prevent hypoxanthine or uric acid calculi.
Nursing Assessment
The nurse conducts a comprehensive nursing assessment to include all contributing factors such as dietary history and fluid intake, family history, environmental factors, medical history (diabetes, hypertension, hyperparathyroidism, inflammatory bowel disease, bowel resection, Crohn's disease, UTIs), social history, review of systems, and surgical history. Next, the nurse counsels the patient on pertinent findings elicited during the comprehensive nursing assessment and provides followup counseling to support dietary and lifestyle changes and monitor outcomes and compliance. In our institution, a Kidney Stone Disease in Adults teaching pamphlet is given to patients at the time that teaching is initiated (see Figure 5). Specific dietary modification education includes encouraging reduced animal protein intake for elevated urinary sulfate. See Table 7 for more recommendations for patients with stones. Additional dietary recommendations may be found in Krieg (2005), in this issue of Urologic Nursing.
Figure 5. Kidney Stone Disease in Adults
Nursing Interventions
The nurse should:
Perform pain assessments to include Visual Analog, numerical, or Wong-Baker scales as appropriate for patient population to assess level of pain and effectiveness of outcome with pain interventions.
Provide pharmacological education. Analgesics are usually used liberally, including parenteral (IM/IV) agents such as ketorolac (Toradol®), meperidine (Demerol®), and morphine, and oral narcotic/analgesic combinations (Department of the Navy Bureau of Medicine and Surgery, 2004). Use of narcotic medication needs to be explained, as well as side effects such as nausea, vomiting, constipation, and caution with driving or operating machinery.
Review bowel patterns and suggest interventions to prevent constipation due to pain medication.
Assess contributing factors of dehydration such as nausea, vomiting, and diarrhea; administer antiemetics such as metoclopramide (Reglan®), prochlorperazine (Compazine®), granisetron (Kytril®), or ondansetron (Zofran®); administer antidiarrheal agents such as loperamide (Imodium®), diphenoxylate/atropine (Lomotil®), or paregoric; and assess effectiveness of outcomes. If severe nausea and vomiting occur, patients must be aware that prevention of dehydration and electrolyte imbalance may require IV hydration, prescription antiemetics, and solutions such as Gatorade® or Pedialyte® to replace electrolytes lost via the GI tract.
Assess vital signs, checking for orthostatic hypotension (a drop in blood pressure and increase in pulse with positional changes), and monitor patient weights.
Encourage increases in daily fluid intake, especially water, and monitor outcomes of interventions through patient voiding history and 24-hour urine reports. The most important lifestyle change to prevent stones is drinking more fluids, especially water, up to 2 quarts/day.
Educate the patient on completing a voiding diary to track daily urine output.
Educate the patient on the importance of completing laboratory tests ordered, especially 24-hour urine collections. These can become an imposition on the patient's quality of life, especially if the patient is active and working.
Educate the patient on collecting urine specimens and straining urine.
Educate the patient on diagnostic testing, including required dietary or bowel preparation to reduce anxiety.
Educate the patient on the importance of weight loss, maintaining weight loss, and daily exercise.
Provide counseling on health promotion and maintenance, stressing the importance of followup care to evaluate causes of stone formation in an effort to prevent future recurrences.
Preventative Health Maintenance/Lifestyle Changes
Effective kidney stone prevention depends upon the stone type and identifying risk factors for stone formation. An individualized treatment plan incorporating dietary changes, supplements, and medications can be developed to help prevent the formation of new stones. If kidney stones develop despite increasing fluid intake and making changes to diet, medications can be prescribed to help dissolve the stones or to prevent formation of new stones.
As a health care provider, it is imperative that causes of stone formation be investigated to prevent future occurrences that may lead to permanent kidney damage. Patient education and counseling are vital to effective care, and can be provided by the urologic nurse to promote lifestyle changes in this patient population. Weight management is a critical factor in managing stone formation and prevention of future occurrences, as evidenced by a study at Brigham and Women's Hospital, Boston (Guttman, 2005). Researchers evaluated the correlation of obesity and weight gain with the risk of developing kidney stones. The findings indicated obesity was a contributing factor in stone development since, as we age, the majority of weight gain is from fat tissue, not bone or muscle. The risk of developing stones increased by 71% to 109% among younger and older women in the highest categories of weight, BMI, and waist circumference, and by 33% to 48% in men. These findings support the need for health care providers to emphasize the importance of exercise and weight management in a prevention program. Dietary recommendations for stone formers are discussed in detail by Krieg (2005).
Conclusion
With appropriate diagnosis and treatment of specific disorders resulting in nephrolithiasis, a remission rate greater than 80% can be obtained (see Table 8). In patients with mild to moderate severity of stone disease, virtually total control of stone disease can be achieved with a remission rate greater than 95% (Preminger, Harvey, & Pak, 1985). The need for surgical stone removal may be reduced dramatically or eliminated with an effective prophylactic program. Selective pharmacologic therapy also has the advantage of overcoming nonrenal complications and averting certain side effects that may occur with nonselective medical therapy. It is clear that selective medical therapy alone cannot provide total control of stone disease. A satisfactory response requires continued, dedicated compliance by patients with the recommended program and a commitment by the physician to provide long-term followup and care, with the intention of improving quality of life by eliminating the symptoms caused by urinary tract stones.
From Applied Radiology
Chronic Kidney Disease: CT or MRI?
Sameh K. Morcos, FRCS, FFRRCSI, FRCR
Posted: 06/03/2008; Applied Radiology. 2008;37(5):19-24. © 2008 Anderson Publishing, Ltd.
Abstract and Introduction
Abstract
Contrast-induced nephrotoxicity and nephrogenic systemic fibrosis are complications that have been reported in certain patients following contrast-enhanced imaging. The author presents approaches to avoid these effects in patients with reduced renal function and suggests how to choose between contrast-enhanced CT or magnetic resonance imaging (MRI) in high-risk patients.
Introduction
Patients with reduced renal function are at risk of developing contrast-induced nephrotoxicity (CIN) following a contrast-enhanced computed tomography (CT) examination with an iodinated contrast agent[1] and at risk of developing nephrogenic systemic fibrosis (NSF) after a contrast-enhanced magnetic resonance imaging (MRI) examination with an extracellular gadolinium-based contrast agent.[2] This article will present an overview of these 2 adverse effects as well as approaches to avoid these complications. The choice between contrast-enhanced CT or MRI in this group of patients will be discussed.
Contrast-induced Nephrotoxicity
Contrast-induced nephrotoxicity implies that impairment in renal function (an increase in serum creatinine by more than 25% or 0.5 mg/dL) has occurred within 3 days following the intravascular administration of contrast, in the absence of an alternative etiology.[3]
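Stated operationally, the definition above reduces to a two-part threshold test. The sketch below is an illustrative encoding of the criterion as quoted (a rise of more than 25% or more than 0.5 mg/dL within 3 days; the absence of an alternative etiology remains a clinical judgment); the function name and example values are ours, not part of the source.

```python
def meets_cin_criterion(baseline_scr: float, followup_scr: float) -> bool:
    """CIN per the definition quoted above: serum creatinine rises by
    more than 25% or more than 0.5 mg/dL within 3 days of intravascular
    contrast, absent an alternative etiology (assessed clinically)."""
    rise = followup_scr - baseline_scr
    return rise > 0.5 or rise > 0.25 * baseline_scr

# A 1.2 -> 1.8 mg/dL rise (+0.6 mg/dL, +50%) meets the criterion;
# 1.6 -> 2.0 mg/dL (+0.4 mg/dL, exactly +25%) does not.
assert meets_cin_criterion(1.2, 1.8)
assert not meets_cin_criterion(1.6, 2.0)
```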
Incidence of CIN After Intravenous Injection
The precise incidence of CIN after intravenous (IV) administration remains unclear because of the small number of studies that have investigated this issue. According to a recent review, only 40 studies investigating CIN after IV administration of iodinated contrast media could be identified over the last 40 years. In contrast, there were >3000 reports on CIN after intra-arterial administration of contrast media over the same period.[1] According to this review, the incidence of CIN after IV injection varied from 0% to 21%.[1] However, 1 study reported an incidence as high as 42% in patients with advanced renal impairment (serum creatinine >2.5 mg/dL) before the contrast injection.[1] An incidence between 5% and 10% might be expected in a group of patients with different degrees of renal impairment before IV contrast administration.[1]
Clinical Importance of CIN
The effect of the development of CIN after IV contrast administration on a patient's morbidity and mortality is not clear and has not been adequately documented in the literature. However, CIN after intra-arterial administration is known to increase in-hospital morbidity and mortality.[4-6] Several reports have documented that CIN increases the incidence of nonrenal complications such as sepsis, lung infection, major adverse cardiac events, and delayed wound healing.[4,5] An increase in mortality among patients with CIN has also been documented.[6] It is more than likely that CIN that develops after IV contrast injection will have some deleterious effect, particularly in patients who suffer from advanced renal impairment (glomerular filtration rate [GFR] <30 mL/min) before contrast administration.
How to Reduce the Risk of CIN
In the author's opinion, the guidelines produced by the European Society of Urogenital Radiology (ESUR) in 1999 remain the most practical and effective approach to minimize the risk of CIN (Table 1), despite the large number of recent publications in this field.[3,7] Patients with renal impairment in whom the administration of contrast is deemed necessary should receive the lowest possible dose of isosmolar or low-osmolar nonionic contrast and hydration (100 mL/hour) for at least 4 hours before and after contrast injection.[3,7] The effectiveness of the prophylactic use of nephroprotective drugs such as acetylcysteine remains uncertain, and consistent protection has not been proven in the reports investigating the usefulness of these drugs.[5,8]
Nephrogenic Systemic Fibrosis
This condition mainly affects patients with end-stage renal disease (ESRD). It was first reported in the literature in 2000 and was named nephrogenic fibrosing dermopathy.[2] However, it later became apparent that it is a multisystem disease and the fibrotic changes affect other organs such as lungs, heart, liver, and muscles, in addition to the skin. Hence, the name nephrogenic systemic fibrosis is now used instead of nephrogenic fibrosing dermopathy to reflect the multisystem nature of the disease.[2] In January 2006, an Austrian nephrologist reported 5 cases of NSF after contrast-enhanced MRI examination and, for the first time, suggested a possible causal relation between the use of gadolinium (Gd)-based contrast and NSF.[9] Since this publication, several reports have appeared in the literature that document the development of NSF in patients with advanced renal impairment following exposure to extracellular Gd contrast.[10-13]
Clinical Picture
Nephrogenic systemic fibrosis affects patients with advanced renal insufficiency, including those on dialysis. The disease has also been reported in patients suffering from hepatorenal syndrome and those requiring liver transplantation.[2,13] Most cases of NSF have developed following the administration of Gd contrast. In a very few cases, exposure to Gd-based contrast could not be confirmed. The disease is characterized by scleroderma-like skin lesions that can be painful and pruritic. The skin changes may progress to cause flexion contractures at joints.
The skin lesions mainly affect the limbs and trunk but spare the head and neck. The fibrosis may also affect the liver, lung, heart, and muscles. The disease develops 24 hours to approximately 3 months after receiving Gd contrast. The dose of Gd contrast varied from 18 to 50 mL per examination. Some of the severe cases of NSF have been associated with multiple exposures to Gd contrast.[10-13]
Epidemiology
The incidence of NSF in patients with ESRD who were exposed to Gd contrast is approximately 5%.[2,13] However, the precise incidence and extent of the disease remain uncertain. Nephrogenic systemic fibrosis has been reported worldwide with no ethnic, age, or gender preference. The majority of cases (>90%) were associated with the use of the nonionic linear Gd contrast agent gadodiamide (Omniscan, GE Healthcare, Princeton, NJ). A few cases have been reported with the nonionic linear Gd contrast gadoversetamide (OptiMARK, Tyco Healthcare/Mallinckrodt, St. Louis, MO) and the ionic linear Gd contrast gadopentetate dimeglumine (Magnevist, Bayer Schering, Germany).[13] A mild case of NSF has been documented following multiple exposures to gadoteridol (ProHance, Bracco Diagnostics, Inc., Princeton, NJ).[14]
The Implication of the Epidemiology of NSF
The stability of a contrast agent reflects the ability of the chelate to retain the toxic gadolinium ion (Gd+++) in the molecule; strong binding between Gd+++ and the chelate indicates high stability. The stability of Gd contrast is likely to be an important factor in the pathogenesis of NSF, as the majority of cases were associated with the use of nonionic linear chelates that are the least stable molecules.[13] Only a single case of mild NSF has been reported with the macrocyclic agents that are more stable than the linear chelates.[15,16] No cases so far have been reported following the sole use of the most stable Gd contrast agent, the ionic macrocyclic chelate gadoterate meglumine (Dotarem, Guerbet, S.A., Paris, France).[13]
Factors That Determine the Stability of Gd Contrast
Shape (Linear or Cyclic). A macrocyclic chelate offers better protection and binding of Gd+++ than a linear structure.[16]
Ionicity. Nonionic chelates are less stable than ionic ones. The replacement of a carboxyl group by a nonionic group weakens the binding of the chelate to Gd+++, particularly in the nonionic linear molecule.[16]
Markers of Gd-contrast Stability. The following measurements are used in vitro to assess the stability of the Gd chelates: thermodynamic stability constant, conditional stability value, and dissociation half-life at pH 1.0. High values indicate high stability of the molecule.[15,16] The presence of a significant amount of excess chelate in the commercial preparation is an indirect marker of the instability of the molecule.[15,16] According to in vitro data, the least stable Gd chelates are the nonionic linear molecules. The commercial preparations of these molecules also contain the largest amount of excess chelates in comparison to other Gd contrast agents. The Gd contrast agent with the highest stability values and no excess chelates is the ionic macrocyclic preparation.[16] However, in vivo data measuring the amount of Gd retained in tissues approximately 7 days after IV administration of Gd contrast in animals with normal renal function (as a marker of stability) showed no significant difference in Gd retention among macrocyclic agents.[17,18]
Pathophysiology of NSF
Extracellular Gd contrast is eliminated from the body almost exclusively by the kidneys. In patients with renal impairment, the biological half-life is prolonged, which increases the possibility of transmetallation. In addition, molecules of low stability are prone to transmetallation with endogenous ions, leading to the release of free Gd.[16] Peripherally deposited Gd may act as a target for circulating fibrocytes, initiating the process of fibrosis. In addition, Gd in the tissues may cause the release of a variety of cytokines, particularly transforming growth factor beta (TGF-β), and activation of the enzyme transglutaminase 2 (TG2), which promotes fibrosis.[19,20] Recent studies have reported Gd deposition in skin biopsies of affected areas in patients with NSF.[21]
Important Risk Factors for NSF
Advanced renal impairment (GFR <15 mL/min), the dose and type of Gd contrast used (the use of large doses, particularly of linear nonionic agents), the multiple repeat administration of Gd contrast, the presence of proinflammatory conditions (particularly vascular complications), the administration of high doses of erythropoietin, and hyperphosphatemia (which increases the chance of retaining ionized Gd in tissues) all have been reported as risk factors for the development of NSF.[12,22]
How Can the Risk of NSF Be Reduced?
Patients with GFR <30 mL/min, including those on dialysis, should not receive nonionic linear chelates. The lowest possible dose of stable Gd contrast agents (macrocyclic chelates) should be used in these patients.[13] Contrast-enhanced MRI examination should be avoided whenever possible during pro-inflammatory events.[12] Although hemodialysis shortly after Gd contrast administration has not been shown to prevent NSF, patients on hemodialysis can be scheduled to have the dialysis session shortly after the MRI examination to reduce the Gd contrast load.[13] Patients on peritoneal dialysis are at particular risk, as the elimination of Gd contrast by peritoneal dialysis is rather slow. Continuous ambulatory peritoneal dialysis for 20 days eliminates only 69% of the injected dose.[23] Therefore, several rapid exchanges of the dialysis fluid should be encouraged after contrast-enhanced MRI examination to speed the elimination of Gd. The ESUR has recently published guidelines on reducing the risk of NSF (Table 2).[13]
The Use of Contrast in Patients With Renal Impairment: Choosing CT or MRI
The following points should be considered in deciding whether a contrast-enhanced CT or MRI examination should be performed in a patient with reduced renal function.
1) Patients at high risk should be identified before contrast administration. Serum creatinine should be measured either routinely before contrast injection or selectively in patients with a history of renal disease, proteinuria, prior kidney surgery, hypertension, gout, or diabetes mellitus.[5,24,25] Serum creatinine can be used to determine the estimated glomerular filtration rate (eGFR) of the patient with the modification of diet in renal disease (MDRD) equation that is currently in wide use in many laboratories[24] (a worked sketch of the calculation follows this list):
eGFR <60 mL/min is a risk factor for CIN.
eGFR <30 mL/min is a risk factor for NSF.[13]
2) The contrast administration has to be deemed essential for the patient's management, and the potential risk must be weighed against the benefits.
3) Consideration should be given to imaging techniques that may offer the same diagnostic information without the need to administer iodinated or Gd contrast agents:
Ultrasound ± ultrasound contrast agents
Noncontrast MRI studies
CT without IV contrast
CO2 for angiography
Isotope imaging
4) Clear communication with the patient is important, particularly to explain the reason for the choice of the examination. The patient should also be involved in the decision-making process. Explain potential risks to the patient without being an alarmist.
5) Knowledge and clinical wisdom should help the radiologist in answering the following questions:
Which technique will offer the most accurate diagnostic information?
What is the likelihood and seriousness of the risk?
Do the clinical benefits justify the risk?
How can the risk be minimized?
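Returning to point 1, the eGFR computation can be sketched concretely. The MDRD coefficients below (the original 186-based, four-variable formulation) come from the published MDRD literature rather than from this article, and IDMS-traceable laboratories use 175 in place of 186, so a local laboratory's reported eGFR should always take precedence; the threshold flags simply restate the two risk cutoffs quoted above.

```python
def egfr_mdrd(scr_mg_dl: float, age_years: float,
              female: bool, black: bool) -> float:
    """Four-variable MDRD estimate in mL/min/1.73 m^2 (original
    186-based coefficients; assumed here, not given in the article)."""
    egfr = 186.0 * scr_mg_dl ** -1.154 * age_years ** -0.203
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.210
    return egfr

def contrast_risk_flags(egfr: float) -> list:
    # Cutoffs quoted in the article: <60 flags CIN risk, <30 flags NSF risk.
    flags = []
    if egfr < 60:
        flags.append("risk factor for CIN")
    if egfr < 30:
        flags.append("risk factor for NSF")
    return flags

# Example: a 70-year-old white woman with serum creatinine 1.8 mg/dL.
e = egfr_mdrd(1.8, 70, female=True, black=False)
print(round(e), contrast_risk_flags(e))  # ~30 mL/min: both flags, at the margin
```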
Balancing Risk: CT and MRI
The chance of inducing CIN is much higher than that of inducing NSF in patients suffering from renal impairment. The prevalence of CIN in patients with GFR <60 mL/min is approximately 10% after a contrast-enhanced CT examination[1,26] and increases to 30% to 40% in patients with GFR <30 mL/min.[1] On the other hand, NSF occurs mainly in patients with advanced reduction in renal function (GFR <30 mL/min), with an incidence of <5%.[2] In addition, all iodinated contrast agents have the potential to induce CIN, whereas NSF can possibly be prevented by using the lowest possible dose of a macrocyclic Gd contrast and avoiding repeat contrast administration within a short period of time.[13] The center that has reported the largest series of cases with NSF has not seen a single new case of NSF since it stopped using gadodiamide in March 2006 and switched to a macrocyclic MRI contrast.[26] In contrast, CIN cannot be completely avoided in spite of taking all necessary precautions.[5]
Though NSF is a serious complication with no effective treatment, CIN remains a source of concern because it also increases patient morbidity and mortality.[6] A recent study reported that 4.8% of patients who developed CIN after a contrast-enhanced CT examination then developed irreversible renal impairment.[26] The further reduction in renal function is bound to adversely affect the long-term outcome for these patients.[26]
Thus, considering the previously mentioned points, the balance of risk seems to be in favor of the use of contrast-enhanced MRI studies in patients with renal impairment. The incidence of NSF remains low, and the condition can be avoided by taking the correct precautions.[27]
Administration of Contrast to Patients on Dialysis
Patients on Hemodialysis
Patients on hemodialysis are at an increased risk of developing NSF. Therefore, all necessary precautions should be implemented in these patients if a contrast-enhanced MRI examination is deemed necessary.[2,13] In contrast, CIN is irrelevant in hemodialysis patients, as the kidneys are already extensively damaged with no important residual renal function to protect. The administration of iodinated contrast agents to these patients usually has no important clinical consequence.[23]
Patients on Peritoneal Dialysis
These patients are at particular risk and require careful assessment and wise judgment when the use of contrast agents is considered. Protecting residual renal function is clinically important, and therefore, CIN is better avoided. They are at increased risk of NSF because the prolonged half-life of Gd contrast increases the possibility of transmetallation and release of free Gd ions.[13]
Conclusion
A contrast-enhanced MRI examination in patients with renal impairment is probably safer than contrast-enhanced CT, provided that the examination is essential for the patient's management and that all necessary precautions have been implemented. The possibility of inducing NSF might be eliminated with the careful selection of the Gd contrast to be administered, by avoiding large contrast doses, and by preventing multiple repeat contrast administrations.
From Medscape Rheumatology > Ask the Experts > Rheumatoid Arthritis and Related Conditions
Gouty Arthritis Treatment Complicating Anticoagulation
Robert Terkeltaub, MD
Posted: 01/08/2002
Question
A 55-year-old woman with gouty arthritis is receiving anticoagulant therapy (nicoumalone 2 mg daily) because of mitral valve replacement. Her serum uric acid level was 11 mg/dL. After remission she took allopurinol 150 mg daily, but it interfered with her international normalized ratio (INR).
1. Should this patient be treated for her hyperuricemia?
2. Can we use probenecid instead of allopurinol? Is there any harmful drug interaction between probenecid and nicoumalone?
3. If the patient uses probenecid and an anticoagulant, will she be able to take other medications, eg, antibiotics, nonsteroidal anti-inflammatory drugs, diuretics, digitalis, multivitamins, etc, if needed?
4. Are there any alternative drugs to decrease serum uric acid without interfering with the INR?
Benyamin Lukito, MD
Response from Robert Terkeltaub, MD
With regard to the first question, the indications for uric acid-lowering treatment of gout are: (a) the presence of visible subcutaneous tophi or radiographically detectable tophi or gouty joint erosions; (b) documented overproduction of uric acid; (c) frequent gouty attacks (usually defined as >3 per year or a rate that is increasing substantially in frequency); and (d) occurrence of gouty attacks that are difficult to manage (eg, polyarticular gout or severe gouty attacks in the presence of marked renal insufficiency).[1,2]
With respect to the other questions, use of either allopurinol or the uricosuric drugs probenecid and sulfinpyrazone can interfere with oral anticoagulant therapy. A variety of other significant drug interactions need to be watched for when using probenecid, including changes in the blood levels of certain antibiotics and interference with uricosuric efficacy by acetylsalicylic acid.[1,2] Even the useful uricosuric benzbromarone (which is not available in the United States) can interfere with oral anticoagulant therapy.[3] Attention to potentially remediable hyperuricemia-promoting factors in the patient, such as the degree of renal failure, hypertension, diuretic therapy, alcohol intake, diet, and obesity,[1,2] is advised.
From Journal of the American Pharmacists Association
New Guidelines for Managing Hypercholesterolemia
James M. McKenney
Posted: 07/01/2001; J Am Pharm Assoc. 2001;41(4) © 2001 American Pharmaceutical Association
Abstract and Introduction
Abstract
Objective: To summarize for pharmacists the recently issued Adult Treatment Panel III (ATP III) guidelines for managing hypercholesterolemia from the National Cholesterol Education Program (NCEP).
Data Sources: Executive summary of ATP III, and other pertinent literature as determined by the author.
Study Selection: Not applicable.
Data Extraction: By the author.
Data Synthesis: Like previous guidelines issued by NCEP, ATP III focuses on lowering low-density lipoprotein cholesterol (LDL-C) as the primary target and on exercise, diet, and pharmacotherapy as the primary means of lowering patients' coronary heart disease (CHD) risk. The new guidelines recognize LDL-C levels of less than 100 mg/dL as optimal for all patients, and increase attention on high triglyceride levels (above 200 mg/dL). ATP III places more emphasis on identifying patients at risk for CHD and CHD events (e.g., myocardial infarctions, revascularization procedures). To apply the recommendations of ATP III in pharmaceutical care practice, pharmacists should follow a six-step process: (1) Assess the patient's lipid profile (full panel, not just total cholesterol); (2) assess and categorize the patient's CHD risk (using a point system reflecting the levels of risk inherent in certain factors); (3) establish treatment goals and approaches (the greater the risk, the more aggressive the management); (4) initiate therapeutic lifestyle changes (including new recommendations for low intake of saturated fats and dietary cholesterol); (5) initiate LDL-C lowering drug therapy (often with combination therapy); and (6) consider other lipid factors (particularly hypertriglyceridemia and the metabolic syndrome).
Conclusion: Most patients who begin lipid-lowering therapy stop it within 1 year, and only about one-third of patients reach treatment goals. The release of the ATP III guidelines provides pharmacists a great opportunity to enhance pharmaceutical care services directed specifically at patients with hyperlipidemia.
Introduction
Without interventions or improvements in care, coronary heart disease (CHD) will cause the death of about one-half of all Americans living today. One of the key steps in reducing CHD risk, many think, is lowering blood cholesterol levels. Since 1988 the National Cholesterol Education Program (NCEP)[1-3] has provided guidelines to health professionals on how people can best lower their cholesterol levels and, thereby, their risk of cardiovascular complications and death. The third iteration of the NCEP's guidelines, the Adult Treatment Panel III (ATP III), was released in May of this year.
The new guidelines seek to prevent or delay CHD events such as myocardial infarctions (MIs), revascularization procedures (i.e., angioplasty and bypass surgery), and acute coronary syndromes by modifying abnormal blood lipid levels (see Table 1). In the new guidelines, emphasis has been placed on more accurately identifying patients who have a high risk of a CHD event and matching the intensity of treatment to each patient's level of risk: the higher the CHD risk, the lower the low-density lipoprotein cholesterol (LDL-C) treatment goal and the more aggressive the treatment.
In this article, ATP III recommendations are described and suggestions are offered on how pharmacists can best use them in their daily practices. Pharmacists should think of a six-step process when applying the new guidelines in their pharmaceutical care services:
Assess the patient's lipid profile.
Assess and categorize the patient's CHD risk.
Establish treatment goals and approaches.
Initiate therapeutic lifestyle changes.
Initiate LDL-C lowering drug therapy.
Consider other lipid factors.
Step 1: Assess the Patient's Lipid Profile
All adults over the age of 20 should have a 12-hour fasting lipid profile performed every 5 years. The full panel profile, which includes both total cholesterol and the major subcomponents, is needed to properly evaluate and treat patients.
The key measure in this profile is LDL-C. An elevated LDL-C level is a major cause of CHD. Treatments that reduce LDL-C have been shown to reduce CHD risk by 25% to 45% over 5 years (and possibly by twice as much in 10 years). The CHD risk associated with LDL-C is graded: the higher the LDL-C level, the greater the CHD risk. A new classification scheme for LDL-C is based on this graded relationship with risk (see Table 2).
The concentration of high-density lipoprotein cholesterol (HDL-C) is inversely related to CHD risk: the lower the HDL-C level, the higher the risk. HDL-C levels below 40 mg/dL are now classified as low. These levels are generally associated with a higher CHD risk. An HDL-C level of 60 mg/dL or above is classified as high and is associated with a lower CHD risk.
ATP III also provides a new classification system for triglycerides, one that recognizes the importance placed on this component of the lipid panel (see Table 3). Evidence is accumulating that an elevated triglyceride level is an independent predictor of CHD risk. Generally, levels in the borderline high and high range (i.e., 150 mg/dL to 500 mg/dL) are associated with increased CHD risk. These levels reflect the presence of triglyceride-rich lipoproteins (i.e., remnant very-low-density lipoproteins [VLDL] and intermediate-density lipoproteins). However, it is the cholesterol -- and not the triglycerides -- in these particles that contributes to the increased CHD risk. This is described more fully in the discussion about the metabolic syndrome under Step 6. Very high triglyceride levels (i.e., > 500 mg/dL) indicate the presence of chylomicrons in addition to VLDL particles. Patients with very high triglyceride levels, especially those whose triglycerides exceed 1,000 mg/dL, are at increased risk for pancreatitis.
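A small classifier captures the cutpoints quoted in the last two paragraphs. It encodes only the values stated in the text; the full ATP III tables (Tables 2 and 3, not reproduced here) carry finer gradations, and the function names are ours:

```python
def classify_hdl(hdl: float) -> str:
    # Cutpoints as stated in the text (mg/dL): <40 low, >=60 high.
    if hdl < 40:
        return "low (higher CHD risk)"
    if hdl >= 60:
        return "high (lower CHD risk)"
    return "intermediate"

def classify_triglycerides(tg: float) -> str:
    # Cutpoints as stated in the text (mg/dL).
    if tg > 1000:
        return "very high; marked pancreatitis risk"
    if tg > 500:
        return "very high (chylomicrons present in addition to VLDL)"
    if tg >= 150:
        return "borderline high to high (triglyceride-rich lipoproteins)"
    return "below the borderline-high range"

print(classify_hdl(38))             # low (higher CHD risk)
print(classify_triglycerides(620))  # very high (chylomicrons ...)
```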
Step 2: Assess and Categorize the Patient's CHD Risk
Once the fasting lipoprotein profile has been obtained and assessed, a history of clinical CHD events and CHD risk factors should be obtained. With this information, the patient can be classified into one of three risk categories as presented in Table 4.
CHD or a CHD risk equivalent.
Two or more CHD risk factors.
Zero or one risk factor.
CHD or CHD Risk Equivalent
Patients who have experienced a CHD event have the highest risk of experiencing another event, a risk that exceeds 20% over 10 years. The exact level of the risk depends on the patient's cholesterol level, presence of other CHD risk factors, genetic predisposition, lifestyle, and treatment. In practice, the CHD patient can be identified by the presence of one or more of the following:
Signs and symptoms of stable angina pectoris.
History of an MI.
Evidence of a silent MI or myocardial ischemia.
History of unstable angina.
Revascularization procedures such as coronary bypass surgery and angioplasty.
ATP III has increased the number of patients who fit into this category by adding the CHD risk equivalent patient. As the name implies, patients with a CHD risk equivalent have the same level of CHD risk (i.e., > 20% in 10 years), but have not yet experienced a CHD event. Following is a description of the three CHD risk equivalent patient groups.
Patients with Other Forms of Atherosclerotic Vascular Disease
Patients who have clinical evidence of atherosclerosis in other vascular beds have a similar CHD risk. This includes patients with:
Peripheral vascular disease.
Abdominal aortic aneurysm.
Symptomatic carotid artery disease (i.e., thrombotic stroke or transient ischemic attacks).
Men and women with peripheral artery disease (PAD) and an ankle/brachial blood pressure index (ABI) of < 0.9 have a risk of 2.4% to 2.9% per year of having a CHD event (i.e., 24% to 29% in 10 years). The risk approaches 40% in 10 years in PAD patients with an ABI of < 0.7. Similarly, patients who have stenotic lesions in their carotid artery have a CHD event risk of 1.4% to 8.3% per year (14% to 83% in 10 years). Patients with an abdominal aortic aneurysm have a CHD risk of 19% in the next 10 years.
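A note on the arithmetic in the paragraph above: the 10-year figures are the annual risks scaled linearly (2.4% per year × 10 years = 24%). Compounding the annual risk instead gives a slightly lower number; a quick check:

```python
annual_risk = 0.024                 # 2.4% per year, from the PAD example above
linear_10yr = annual_risk * 10      # the article's scaling: 24%
compound_10yr = 1 - (1 - annual_risk) ** 10  # ~21.6% if risks compound
print(f"{linear_10yr:.1%} linear vs {compound_10yr:.1%} compounded")
```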
Patients with Type 2 Diabetes
Patients with type 2 diabetes are at risk for developing microvascular complications such as retinopathy, nephropathy, and gastroenteropathy, and these conditions complicate their lives greatly. But most patients with type 2 diabetes die from macrovascular disease -- that is, a CHD event. Most type 2 diabetic patients have more than a 20% risk of experiencing a CHD event in the next 10 years. The 10-year risk in diabetic patients who have already experienced a CHD event is much higher, approaching 50%. Young patients with type 2 diabetes probably have less than a 20% 10-year risk, but their lifetime risk is disproportionately large, justifying their classification into a CHD risk equivalent category.
A word of caution is appropriate here. The classification of type 2 diabetics as CHD risk equivalents is based mostly on observational data. There are no controlled clinical trials that demonstrate the benefit of LDL-C lowering in type 2 diabetic patients. Controlled clinical trials with hypoglycemic therapy have shown fewer microvascular complications (such as retinopathy and neuropathy) but not macrovascular disease (such as MI). Post hoc analyses of controlled clinical trials of lipid-lowering therapy in hypercholesterolemic patients have revealed fewer CHD events in patients with diabetes who were treated with LDL-C lowering therapy. Observational data showing a link between the presence of diabetes and CHD events are substantial and consistent, but confirmation of a benefit will have to await the results of several ongoing controlled trials in patients with diabetes.
Patients with type 1 diabetes appear to have a similar high risk of CHD, but because less information is available on the link with cholesterol levels, clinical judgment must be exercised in the management of lipid levels in these patients.
Patients with Global Risks Exceeding 20% in 10 Years
The final patient group assigned CHD risk equivalent status are those whose estimated CHD risk is 20% or more in 10 years based on a global risk assessment. This will be discussed further below.
Patients with Two or More CHD Risk Factors
Patients who have not experienced a CHD event and do not have a CHD risk equivalent but do have two or more CHD risk factors are assigned to an intermediate risk category. The list of risk factors for making this determination is presented in Table 5 and reviewed below. Note that the list does not include diabetes, which is now recognized as a CHD risk equivalent.
Age
In many respects, age is the most potent CHD risk factor. CHD risk increases sharply with advancing age such that age may be considered a surrogate marker for atherosclerotic disease. The longer a person lives, the longer his or her cumulative exposure to cholesterol deposition and other risk factors, and the greater his or her development of atherosclerosis. This perspective supports lipid-modifying treatment in both young adults to retard the progression of atherosclerotic disease and older adults to prevent disabling events.
Sex
At any age, men have a higher CHD risk than women, although ultimately CHD is the most common cause of death and disability for both groups. CHD events in women lag behind those of men by 10 to 15 years. The risk for CHD in men becomes significant in the mid-40s and in women after menopause (i.e., about 52 years of age). Before menopause, CHD risk for women is very low.
Family History of Premature CHD
Family history is considered positive when CHD events are documented in first-degree male relatives younger than 55 years of age or first-degree female relatives younger than 65 years of age. Many clinicians believe that a strong family history of premature CHD events is one of the most important ways to identify patients with a high CHD risk. Quite often, a family history of premature CHD is also accompanied by a family history of CHD risk factors, and this may be the way risk is actually transmitted.
Cigarette Smoking
Current cigarette smoking is a powerful predictor of CHD risk. This risk is proportional to the degree of smoking. When a patient stops smoking, CHD risk drops quickly (within months) and ultimately may drop by as much as 45%.
Hypertension
Blood pressure at or above 140/90 mm Hg or current antihypertensive treatment defines this risk factor. The association between hypertension and CHD is also powerful. With effective treatment, CHD risk is reduced, but not to baseline. This is why a diagnosis of hypertension per se, even if blood pressure is effectively controlled with drugs, is identified as a CHD risk factor for the purposes of determining the need for lipid treatments.
Low HDL-C
An HDL-C level of less than 40 mg/dL is now considered a CHD risk factor; previously, the value was less than 35 mg/dL. HDL-C is inversely related to CHD risk. Low HDL-C is often a marker for other risk factors, including increased remnant lipoproteins; small, dense LDL-C; obesity; insulin resistance; diabetes; physical inactivity; and genetic disorders. Low HDL-C is a particularly powerful risk factor for women. HDL-C is involved in reverse cholesterol transport and is necessary for removing cholesterol deposited in extrahepatic tissue from the body. The average HDL-C level in men is about 45 mg/dL, whereas it is about 55 mg/dL in women. This difference is used when making a diagnosis of low HDL-C in the metabolic syndrome (see Step 6 below). However, when counting risk factors for the purpose of setting the LDL-C goal, the definition of low HDL-C is < 40 mg/dL for both men and women.
Other Risk Factors
A few other factors also raise a patient's CHD risk and should be assessed when evaluating patients, but these do not alter LDL-C treatment goals. Three of these "other" risk factors are obesity, physical inactivity, and an atherogenic diet. In some patients, a poor lifestyle -- especially consuming a high saturated fat, atherogenic diet -- plays a major role in causing the lipid disorder, and following a low-fat diet and increasing physical activity can often correct the problem.
Overweight or Obesity
Patients with body mass indices (BMIs) between 25 kg/m² and 29.9 kg/m² are considered overweight, and those with BMIs of 30 kg/m² or more are considered obese. When assessing the dyslipidemic patient, where the extra weight is located matters: if it is primarily around the waist, the risk of CHD is particularly high. In practice, a handy way to identify increased CHD risk associated with weight is to record a waist circumference. One that is more than 40 inches for a man or more than 35 inches for a woman indicates increased risk. Patients with abdominal obesity often have the metabolic syndrome (see Step 6 below) and a 10-year CHD risk in excess of 20% because of the presence of multiple risk factors. Weight reduction can mitigate much of this risk.
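As a quick illustration of the cutpoints in this paragraph, a minimal sketch; the function names are ours, and the thresholds are exactly those stated above:

```python
def weight_category(bmi: float) -> str:
    # BMI in kg/m^2: 25-29.9 overweight, >=30 obese, per the text.
    if bmi >= 30:
        return "obese"
    if bmi >= 25:
        return "overweight"
    return "not overweight"

def abdominal_obesity(waist_inches: float, male: bool) -> bool:
    # Waist >40 in (men) or >35 in (women) indicates increased CHD risk.
    return waist_inches > (40 if male else 35)

print(weight_category(31.2), abdominal_obesity(42, male=True))  # obese True
```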
Physical Inactivity
Physical inactivity, or more precisely, lack of physical conditioning, increases the risk of CHD. Physically fit but overweight individuals are reported to have a CHD risk similar to that of people with no other CHD risk factors, which illustrates the importance of physical activity in modulating CHD risk. Physical inactivity is often associated with other CHD risk factors, including low HDL-C, increased remnant particles, insulin resistance, and high blood pressure. For these reasons, increasing physical activity is a fundamental approach for treating dyslipidemic patients.
Additional Assessments
Literally dozens of other potential CHD risk factors have been proposed. Many of these are being evaluated to determine how much, if any, additional predictive value they provide over LDL-C and the other traditional risk factors. Until this research is complete, it is best to use these measures sparingly in the routine assessment of patients. Occasionally, they may be useful when a traditional assessment leaves questions about a patient's true risk. For example, patients with a moderately elevated LDL-C but a strong family history of premature CHD, or patients who have experienced an MI but have near optimal or optimal LDL-C levels, may be candidates for an assessment of these emerging risk factors. In those cases, the assessments may offer additional information about CHD risk for an individual patient and suggest the need for more aggressive or less aggressive therapy. Some of these emerging risk factors are:
Lipoprotein(a) (Lp[a]).
Homocysteine.
Small, dense LDL.
Apolipoprotein B.
In addition to these risk factors, other assessments can be performed to help uncover the presence of atherosclerosis and thereby identify patients at high risk for a CHD event. These assessments may also be most helpful when the clinician suspects a higher risk than is evident from a traditional assessment. For example, a person with a strong family history of premature CHD events but few CHD risk factors and no history of CHD or a CHD equivalent may be a candidate for one or more of these tests. The attractive thing about these assessments is that they are all noninvasive. However, they are not available in all communities, and many are too expensive to recommend for widespread use. Some of these assessments are discussed briefly below.
Exercise Electrocardiogram
The exercise electrocardiogram (ECG) is used to detect ischemia in patients with flow-limiting coronary stenosis. Patients with a suspicious history of exercise-associated chest discomfort may be candidates for an exercise ECG. If the test suggests the presence of atherosclerotic disease, the patient may be considered as a CHD equivalent or treated as such.
Ankle/Brachial Blood Pressure Index
The ABI is simple, widely available, inexpensive, and noninvasive, and it can confirm the diagnosis of PAD. An ABI of less than 0.9 is an indication of PAD. A low ABI identifies a patient with a CHD equivalent risk.
B-Mode Ultrasound
B-mode ultrasound is also a relatively inexpensive, safe, commonly available, and noninvasive way to determine the thickness of the intimal-medial lining of carotid, aortic, and femoral arteries. The presence of intimal-medial thickening in a carotid artery, for example, suggests the presence of atherosclerosis, which predicts an increased risk of a transient ischemic attack, stroke, and coronary event. If atherosclerosis is detected in a carotid artery, it is also likely to be present in coronary vessels.
Electron Beam Computed Tomography
Electron beam computed tomography (EBCT) has limited utility because it is not available in many communities and because the information it provides is still being evaluated. EBCT detects the presence of calcium within the coronary arterial wall. A high calcium volume score is assumed to indicate the presence of older atherosclerotic plaque (old enough for calcium to be deposited) and suggests the presence of younger, cholesterol-enriched, vulnerable plaques (without calcium deposits) that increase CHD risk. Currently, there is no proof that treating patients with a high calcium volume score with lipid-modifying therapy will reduce CHD events.
High-Sensitivity C-reactive Protein
There has been a great deal of interest in high-sensitivity C-reactive protein (hs-CRP). It is a marker of inflammation and, by inference, of atherogenesis. Kits to perform tests for hs-CRP levels are now available in most community laboratories. A high hs-CRP level has been shown to predict future coronary events in a variety of populations and correlates with CHD risk reduction with lipid-lowering treatment. A high hs-CRP level appears to offer incremental information about a patient's future CHD risk over that predicted by traditional risk factors. What is not currently known is whether treating patients with elevated hs-CRP is associated with fewer CHD events.
Global Risk Assessment
For patients who have two or more CHD risk factors, ATP III has added a further assessment step to define CHD risk more sharply and allow better targeting of lipid-modifying treatment. The tool used for this assessment is a scoring system that estimates absolute 10-year CHD risk (CHD death or nonfatal MI) built from the Framingham database (see Appendix). Included in this assessment is the same risk factor information required for the initial classification of patients (Table 5). Family history is not included in global risk assessment, as it adds little additional precision to the assessment and because it is accounted for through the presence of other risk factors. Based on the global risk assessment, patients are placed into one of three categories:
≥ 20% 10-year risk (CHD risk equivalent).
10% to 20% 10-year risk.
< 10% 10-year risk.
Patients with a 20% or greater 10-year risk are considered CHD equivalents.
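These cutoffs reduce to a simple binning of the Framingham 10-year estimate. A minimal sketch (the point scoring itself lives in the Appendix and is not reproduced here; the function name is ours):

```python
def global_risk_category(ten_year_risk_pct: float) -> str:
    # The three ATP III bins for the Framingham 10-year estimate.
    if ten_year_risk_pct >= 20:
        return ">= 20%: treat as a CHD risk equivalent"
    if ten_year_risk_pct >= 10:
        return "10% to 20% 10-year risk"
    return "< 10% 10-year risk"

print(global_risk_category(14))  # 10% to 20% 10-year risk
```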
Fewer Than Two CHD Risk Factors
The final risk assessment category is for those with fewer than two CHD risk factors (Table 4). These patients almost always have a 10-year CHD risk of less than 10%, making the use of lipid-lowering medications cost-prohibitive. A therapeutic lifestyle change program is recommended for these patients (see Step 4 below).
Step 3: Establish Treatment Goals and Approaches
A basic principle of cholesterol management is that the intensity of treatment should be matched to the level of CHD risk. This approach governs the goals and therapies recommended by ATP III to reduce CHD risk (see Table 6).
CHD or CHD Risk Equivalent
The primary treatment goal for individuals with established CHD or CHD risk equivalent is to achieve the optimal LDL-C level (i.e., < 100 mg/dL) (Table 6). In all cases, lifestyle modification should be initiated. If needed, dietary adjuncts, such as stanol/sterol esters or viscous fiber, may be added to intensify LDL-C lowering. In addition, management of other CHD risk factors is needed.
For CHD or CHD risk equivalent patients with a baseline LDL-C of 130 mg/dL or more, most authorities advise that LDL-C-lowering drug therapy be started simultaneously with lifestyle modification (Table 6).
For CHD or CHD risk equivalent patients with a baseline LDL-C of between 100 mg/dL and 129 mg/dL, several options are available. Most authorities favor the initiation of LDL-C lowering drug therapy and/or the intensification of lifestyle changes to achieve the LDL-C goal. However, because one study demonstrated CHD risk reduction with gemfibrozil in patients with LDL-C concentrations in this range, some authorities believe that treatment with niacin or a fibrate should be considered when LDL-C levels are in this range, especially if the patient has triglyceride levels above 200 mg/dL or HDL-C concentrations below 40 mg/dL.
For CHD or CHD risk equivalent patients with baseline LDL-C concentrations of less than 100 mg/dL, lifestyle modification is indicated. Some authorities reason that the presence of CHD in these patients suggests that the LDL-C level is too high no matter what it is; these specialists advocate LDL-C lowering therapy for all CHD patients. However, no clinical trial evidence currently exists showing that using drug therapy to lower LDL-C levels in patients with levels already below 100 mg/dL offers benefits. Until this information is available, clinical judgment should guide the use of lipid-lowering agents in these patients. Testing these individuals for the presence of one or more of the emerging risk factors (see Additional Assessments above) may provide additional information to guide treatment choices.
Moderate-Risk Patients
The goal of treatment for patients with two or more risk factors and a 10-year risk less than 20% is to achieve LDL-C levels below 130 mg/dL (Table 6). Lifestyle modification should be attempted first. As with CHD patients, dietary adjuncts may be added to a low-fat diet if needed, and other risk factors should be managed in these patients.
For patients with two or more risk factors and a 10-year CHD risk between 10% and 20%, cholesterol-lowering drug therapy may be considered if an LDL-C below 130 mg/dL is not achieved with lifestyle changes in 3 months (Table 6).
In patients with two or more risk factors and a 10-year risk of less than 10%, emphasis should be placed on reducing long-term risk with lifestyle modification. If the LDL-C is above 160 mg/dL after an adequate trial of diet and exercise for at least 3 months, consideration may be given to initiating drug therapy (Table 6).
Low-Risk Patients
These patients have zero or one risk factor and an LDL-C goal below 160 mg/dL. They have a low short-term CHD risk, making treatment with drug therapy not generally cost-effective. However, some of these patients will have a high long-term risk, making them candidates for more aggressive lipid-modifying therapy. Examples of high long-term risk patients are those with one of the following:
A single but severe risk factor (e.g., strong family history).
An emerging risk factor (if measured) (e.g., Lp[a], homocysteine; see Additional Assessments above).
10-year risk assessment approaching 10%.
LDL-C > 190 mg/dL after an adequate trial of lifestyle change.
Step 4: Initiate Therapeutic Lifestyle Changes
Reduction in CHD risk begins with the adoption of a healthy lifestyle. ATP III has recommended a plan to achieve a healthy lifestyle, which it calls "therapeutic lifestyle changes" or "TLC." In many patients, TLC is the only approach required to achieve LDL-C goals. In most patients, it is implemented before initiating drug therapy. In high-risk patients, drug therapy may be initiated simultaneously with TLC. Wherever possible, ATP III believes that referral should be made to a registered dietitian or other qualified nutritionist for instruction and guidance on TLC. Components of TLC include:
Reduced intake of LDL-raising nutrients: saturated fats (< 7% of total calories) and dietary cholesterol (< 200 mg/day).
Dietary adjuncts for enhancing LDL lowering: plant stanol/sterol-containing margarines (2-3 g/day) and viscous (soluble) fiber (e.g., barley, oats, psyllium, apples, bananas, berries, citrus fruits, nectarines, peaches, pears, plums, prunes, broccoli, Brussels sprouts, carrots, dry beans, peas, soy products; 10-20 g/day).
Desirable weight maintenance or reduction if overweight.
Regular physical activity to expend at least 200 calories per day.
Other components of the TLC diet are displayed in Table 7. As noted here, monounsaturated fats can provide up to 20% of the daily caloric intake and account for most of the total fat consumed. Monounsaturated fats, derived mostly from olive oil, canola oil, and fish products, help lower LDL-C concentrations. They are also a rich source of omega-3 fatty acids, a common component of the Mediterranean diet that has been associated with a reduction in CHD events in several large, controlled studies. Carbohydrates make up the major source of daily calories, but should be mostly derived from foods rich in complex carbohydrates, such as grains, especially whole grains, fruits, and vegetables, and not via the high-sugar, high-calorie, "fast food" snacks so commonly available in the food supply in the United States.
The steps to follow in implementing TLC are displayed in Figure 1. When possible, TLC should be initiated before drug therapy. High-risk patients, especially those hospitalized for an acute CHD event, generally do better if TLC and drug therapy are initiated together before discharge. For patients with no evidence of CHD or a CHD equivalent, a minimum of 12 weeks is generally required to fully implement the TLC diet and other lifestyle changes. The patient should be given 6 weeks to adopt the diet and physical activity before returning for the first follow-up appointment. At this visit, the TLC diet may be intensified, adjunct therapies may be added, and a program of physical activity may be developed. During the next follow-up visit, typically 12 weeks after starting the TLC program, drug therapy may be started if the patient is not at the LDL-C goal.
Figure 1. Steps in Implementing Therapeutic Lifestyle Changes (TLC)
Step 5: Initiate LDL-C Lowering Drug Therapy
The first goal of lipid-modifying drug therapy is to lower the LDL-C to goal. Statins are the preferred way to accomplish this because they are highly effective in lowering LDL-C and are very safe. If statins cannot be used because of patient intolerance or contraindications, other LDL-C lowering agents (i.e., bile acid resins [BARs] or nicotinic acid) should be used with the TLC diet to achieve treatment goals. If monotherapy with either of these agents is not successful in achieving treatment goals, combinations of LDL-C lowering drugs may be used. For example, a statin and a BAR, or niacin and a BAR, are effective regimens for lowering LDL-C levels.
Patients should generally return for a follow-up visit after being on drug therapy for 6 weeks (see Figure 2). This is sufficient time to see the full effects of the drug. If the treatment goal has not been achieved at this visit, statin (or other) treatment may be intensified or combination therapy with statins and a BAR or niacin may be prescribed. A second follow-up visit may occur in another 6 weeks. At each visit, patient adherence to TLC and drug therapy should be evaluated and appropriate steps taken to address any problems.
Figure 2. Steps in Initiating Lipid-Modifying Drug Therapy
Statins
Six statins are currently available: atorvastatin (Lipitor -- Pfizer; Parke-Davis), cerivastatin (Baycol -- Bayer), fluvastatin (Lescol -- Novartis), lovastatin (Mevacor -- Merck), pravastatin (Pravachol -- Bristol-Myers Squibb), and simvastatin (Zocor -- Merck). These statins differ somewhat in their LDL-C lowering efficacy. Generally, the greater the LDL-C lowering efficacy of the statin, the more patients will achieve treatment goals. Lovastatin, pravastatin, and simvastatin have demonstrated CHD risk reduction of 25% to 45% with 5 years of treatment in randomized, placebo-controlled clinical trials. Fluvastatin has demonstrated CHD risk reduction in an angiographic trial, and atorvastatin has demonstrated risk reduction in patients with acute coronary syndromes. Most authorities believe that these effects represent a "class" effect and that all statins (as well as all methods of lowering LDL-C for that matter) reduce CHD risk.
Statins reduce LDL-C concentrations by 18% to 55% (see Table 8). Most of this reduction is seen with the initial dose; further LDL-C reduction of 6% to 7% is seen each time doses are doubled thereafter. Maximal statin effect is usually obtained 4 to 6 weeks after initiating therapy or changing doses. Statins may be initiated with traditional starting doses and titrated up as needed to achieve greater LDL-C lowering effects. Alternatively, the dose needed to reduce LDL-C to the treatment goal may be used initially and then adjusted up or down as needed during follow-up visits.
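The dose-response pattern just described lends itself to a back-of-the-envelope projection. A sketch under stated assumptions: the 34% starting-dose reduction and the 6-percentage-point increment per doubling are hypothetical illustrations of the rule quoted above, not data from the article.

```python
def projected_ldl(baseline_ldl: float, initial_reduction: float,
                  doublings: int, per_doubling: float = 0.06) -> float:
    """Project on-treatment LDL-C from the pattern in the text: a large
    reduction at the starting dose plus ~6 percentage points more of the
    baseline for each dose doubling. Hypothetical illustration only."""
    total_reduction = initial_reduction + doublings * per_doubling
    return baseline_ldl * (1 - total_reduction)

# Hypothetical: baseline LDL-C 190 mg/dL, 34% at the starting dose,
# then two doublings (4x the starting dose) for a 46% total reduction:
print(round(projected_ldl(190, 0.34, doublings=2)))  # ~103 mg/dL
```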
Statins may increase transaminase enzymes, suggesting hepatotoxicity, and may cause myopathy, characterized by muscle soreness or weakness and an elevated creatine phosphokinase level of 10 times the upper limit of normal. Fortunately, liver enzyme changes documented during consecutive visits are generally seen in less than 1% of patients, quickly return to normal when the statin is withdrawn, and have not been associated with life-threatening problems such as liver failure or need for hepatic transplant. Myopathy, as defined above, occurs in less than 2 out of every 1,000 patients treated with a statin and dissipates completely and without permanent sequelae when the statin is withdrawn. About 5% to 8% of patients are intolerant of statins, experiencing a variety of symptoms including headache, muscle pain, and gastrointestinal symptoms.
Bile Acid Sequestrants or Resins
Cholestyramine (Questran -- Bristol-Myers Squibb or generic), colestipol (Colestid -- Pharmacia or generic), and colesevelam (Welchol -- Sankyo) are the BARs currently available. Their major pharmacologic effect is to lower LDL-C levels. They have been shown to reduce CHD events in hypercholesterolemic patients when evaluated in controlled clinical trials (Table 8).
When used alone, BARs reduce LDL-C levels by 10% to 30%. When given with a statin, their effect is additive. They are generally dosed two to three times a day.
One of the advantages of BARs is their lack of systemic absorption. Thus, they may be useful in the treatment of patients in whom low systemic exposure is desired, such as young patients who face years of lipid-modifying therapy and women who are attempting to become pregnant.
The older BARs cause gastrointestinal intolerance, including bloating, gas, abdominal pain, and constipation. The older BARs also interfere with the absorption of some drugs (e.g., digoxin, thyroxine, iron, fat-soluble vitamins, and warfarin), necessitating their administration 1 hour before or 4 hours after the BAR is given.
Colesevelam, the most recently marketed BAR, appears to be well tolerated and has few drug interactions.
Nicotinic Acid
Nicotinic acid or niacin lowers LDL-C and triglyceride levels. It is the most effective drug available for raising HDL-C. Niacin has also been shown to reduce recurrent MI and total mortality in a controlled clinical trial.
Nonprescription immediate-release niacin reduces LDL-C levels by an average of 20% to 25% when dosed to 3 grams daily. Niaspan (KOS), an extended-release prescription product, will lower LDL-C concentrations by 15% to 20% at its maximum dose of 2 grams daily. At doses as low as 1 gram daily, either niacin product raises HDL-C levels by 15% to 30% and reduces triglyceride concentrations by 20% to 35%. Niacin is one of the few drugs that lowers Lp(a) concentrations, and it does so by up to 30%. But the clinical relevance of this effect is not known.
The major limitation to niacin is its side effects, including flushing with both immediate- and extended-release products. This effect can be minimized by having the patient take an aspirin tablet 30 minutes before the morning dose of immediate-release niacin or the bedtime dose of Niaspan.
Nonprescription sustained-release niacin products are not FDA-approved for the treatment of hyperlipidemia, nor have they met FDA standards for good manufacturing practices. Thus, products may vary considerably in their absorption and distribution characteristics, making dosing unreliable. Sustained-release niacin has also been associated with severe liver toxicity when given in doses above 2 grams daily.
Combination Drug Therapy
The mere fact that more patients will qualify for aggressive treatment to an LDL-C below 100 mg/dL with the new guidelines means that combination therapy will be used more often. LDL-C lowering can be enhanced by combining two or more LDL-C lowering drugs, such as a statin with either a BAR or niacin, a BAR with niacin, or a statin-BAR-niacin combination. Adding a BAR or niacin to a low-dose statin regimen generally produces an LDL-C reduction similar to that achieved by quadrupling the statin dose.
Step 6: Consider Other Lipid Factors
Metabolic Syndrome
Once the LDL-C goal has been achieved, the next step is to determine whether the patient also has other lipid risk factors that increase CHD risk. One commonly encountered problem is the metabolic syndrome. These patients have atherogenic dyslipidemia characterized by borderline high to high triglyceride levels (i.e., 150 mg/dL to 500 mg/dL, indicative of increased levels of triglyceride-rich remnant lipoproteins), low HDL-C (< 40 mg/dL), and increased small, dense LDL. They also have a constellation of risk factors including some or all of the following: excess body fat distributed mostly around the abdomen, insulin resistance with impaired fasting glucose or diabetes, elevated blood pressure, a proinflammatory state, and a prothrombotic state.
In patients with the metabolic syndrome, any elevation in the LDL-C level, even one just above the optimal level, accentuates their CHD risk. Often, these patients have a 10-year CHD risk of more than 20%, which would qualify them for a CHD risk equivalent designation. Most patients with type 2 diabetes have the metabolic syndrome. Despite growing awareness of this syndrome, office-based diagnosis has been difficult. ATP III has corrected this problem with an easy-to-follow approach to diagnosis of the metabolic syndrome (see Table 9).
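Table 9 is not reproduced in this excerpt, so the sketch below uses the commonly cited ATP III cutpoints from the published guideline (waist >40 in men/>35 in women, triglycerides ≥150 mg/dL, HDL-C <40/<50 mg/dL for men/women, blood pressure ≥130/85 mm Hg, fasting glucose ≥110 mg/dL, with any three of the five establishing the diagnosis); verify against the guideline itself before relying on them.

```python
def atp3_metabolic_syndrome(waist_in: float, male: bool, tg: float, hdl: float,
                            sbp: float, dbp: float, glucose: float) -> bool:
    """Any three of the five ATP III criteria establish the diagnosis
    (cutpoints assumed from the published guideline; mg/dL and mm Hg)."""
    criteria = [
        waist_in > (40 if male else 35),  # abdominal obesity
        tg >= 150,                        # elevated triglycerides
        hdl < (40 if male else 50),       # low HDL-C
        sbp >= 130 or dbp >= 85,          # elevated blood pressure
        glucose >= 110,                   # impaired fasting glucose
    ]
    return sum(criteria) >= 3

# Example: an abdominally obese man with high TG and low HDL-C qualifies.
print(atp3_metabolic_syndrome(43, True, 210, 36, 124, 78, 96))  # True
```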
Weight loss and increased physical activity are the two primary interventions used in treating the metabolic syndrome. These alone can correct the problem. Additionally, high blood pressure should be reduced, and CHD patients should be given an adult aspirin daily (some authorities recommend aspirin prophylaxis for primary prevention patients as well, but proof of its value for these patients is not available). If triglyceride levels remain high and/or HDL-C levels remain low, drug therapies may be considered. Two therapies to consider for this purpose are niacin and fibrates.
Fibrates
Three fibrates are currently available in the United States: gemfibrozil (Lopid -- Pfizer or generic), fenofibrate (Tricor -- Abbott), and clofibrate (Atromid-S -- Wyeth-Ayerst; rarely used now because of its toxicity). Fibrates are used primarily for lowering triglyceride and raising HDL-C concentrations. Controlled clinical trials have demonstrated CHD risk reduction with gemfibrozil, especially in people who have the combination of elevated triglyceride, low HDL-C, and high LDL-C levels. Unlike statins, however, fibrates have not been shown to reduce total mortality.
Fibrates reduce triglyceride levels by 25% to 50% and raise HDL-C concentrations by 10% to 15%. When given to patients with high triglycerides, fibrates sometimes increase LDL-C levels rather than lowering them.
Fibrates may cause myopathy, especially when used in combination with a statin. The most worrisome side effect with gemfibrozil is cholelithiasis (1% incidence). In addition, the anticoagulant effects of warfarin can be accentuated when given with gemfibrozil.
Hypertriglyceridemia
Elevated triglycerides are an independent risk factor for CHD. Whenever triglyceride levels above 150 mg/dL are encountered, a secondary cause should first be sought. Common causes include:
Obesity.
Physical inactivity.
Alcohol intake.
High-carbohydrate diets.
Diabetes.
Renal or liver disease.
Certain drugs (e.g., steroids, estrogens, beta-blockers).
If any of these are present, they should be treated or removed and triglycerides reassessed. If levels remain above 200 mg/dL after correction of secondary causes and LDL-C levels are at goal, ATP III recommends the setting of a second treatment goal defined by non-HDL-C levels (see Table 10). Non-HDL-C is calculated by subtracting HDL-C from total cholesterol. The secondary non-HDL-C goal is set at 30 mg/dL above the LDL-C goal because VLDL particles normally contain up to 30 mg/dL of cholesterol and any amount above this is excessive.
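As a minimal sketch of this arithmetic (the patient values below are hypothetical):

def non_hdl_c(total_chol, hdl):
    # Non-HDL-C = total cholesterol minus HDL-C, in mg/dL.
    return total_chol - hdl

def non_hdl_goal(ldl_goal):
    # Secondary goal: the LDL-C goal plus the ~30 mg/dL of cholesterol
    # normally carried in VLDL particles.
    return ldl_goal + 30.0

# Hypothetical CHD patient (LDL-C goal < 100 mg/dL) with TC 230 and HDL-C 38:
print(non_hdl_c(230.0, 38.0))  # 192 mg/dL, well above...
print(non_hdl_goal(100.0))     # ...the 130 mg/dL secondary goal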
The first treatment approach to achieve non-HDL-C goals is intensification of the TLC diet, with restriction of calories for weight loss in obese patients and increased physical activity. If needed, drug therapy may be initiated. Two approaches may be deployed. First, the LDL-C-lowering drug regimen may be intensified (e.g., the statin dose may be escalated). This will increase removal of LDL and VLDL remnant particles via the upregulated LDL receptor. Second, a fibrate or niacin may be added to the LDL-C-lowering regimen. These therapies either increase triglyceride removal from the lipoprotein (fibrates) or reduce the secretion of VLDL particles from the liver (niacin). Patients given the statin-fibrate combination have an increased risk of myopathy; they should be evaluated at the outset for renal or hepatic dysfunction and potentially interacting drugs, started on the lowest effective dose of each drug, and monitored carefully for symptoms of muscle soreness and weakness. If muscle symptoms occur, the patient should be evaluated to rule out an adverse effect of the drug regimen. With careful selection of the patients who receive this regimen and good monitoring, these drugs can be used together safely.
In patients with very high triglyceride levels -- for whom the risk of pancreatitis is a concern -- treatment consists of a very-low-fat diet (i.e., less than 15% of calories from fat), weight control, increased physical activity, and therapy with one or more triglyceride-lowering drugs (e.g., a fibrate, niacin, and/or fish oils). The sketch below pulls this stepwise approach together.
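The following is a minimal sketch of the triglyceride pathway described above. The 500 mg/dL cutoff for "very high" levels is an assumption (the text flags pancreatitis risk without naming a number), and the function is illustrative, not a clinical decision tool.

def triglyceride_plan(tg_mg_dl, secondary_causes_corrected):
    # Stepwise approach paraphrased from the text above.
    if tg_mg_dl <= 150:
        return "Normal triglycerides: focus on the LDL-C goal."
    if not secondary_causes_corrected:
        return "Seek and treat secondary causes, then reassess."
    if tg_mg_dl >= 500:  # assumed threshold for 'very high' TG
        return ("Very-low-fat diet, weight control, activity, and a fibrate, "
                "niacin, and/or fish oils (pancreatitis risk).")
    if tg_mg_dl > 200:
        return ("Set the non-HDL-C goal (LDL-C goal + 30 mg/dL); intensify "
                "TLC, then intensify the statin or add a fibrate or niacin.")
    return "Borderline: intensify lifestyle therapy and monitor."

print(triglyceride_plan(320, True))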
Low HDL-C
Low HDL-C is also a strong independent predictor of CHD. Although previous guidelines set 35 mg/dL as the cutoff point for low HDL-C, ATP III defines this as less than 40 mg/dL. As with high triglyceride levels, the first step in managing a low HDL-C level is to identify and remove (or diminish) secondary causes. These include:
Elevated triglyceride concentrations.
Obesity.
Physical inactivity.
Cigarette smoking.
Very high carbohydrate intakes.
Certain drugs (steroids, beta-blockers, progestins).
In patients with low HDL-C concentrations, no clinical studies have addressed whether increases in HDL-C (with little or no change in LDL-C or triglycerides) will lower CHD risk. However, the observational literature describing an increase in CHD risk with low HDL-C levels is voluminous. Based on this, most authorities believe that patients with low HDL-C concentrations should be treated.
The first step in treating low HDL-C levels is to improve life habits, specifically increasing physical activity and losing weight if the patient is overweight or obese. If the patient also has an elevated LDL-C concentration, use of a statin to reduce LDL-C also lowers the risk associated with subnormal HDL-C levels. Most often, a low HDL-C level is found with other risk factors as in patients with the metabolic syndrome, and treatment of the syndrome as described above also addresses the low HDL-C problem.
Likewise, if the low HDL-C is present with a high triglyceride level (which it almost always is), it will improve as treatment is initiated to achieve the non-HDL-C treatment goal.
Finally, for the rare patient who has an isolated low HDL-C (with LDL-C and triglyceride levels in the normal range), medications to raise HDL-C concentration -- such as niacin or a fibrate -- can be considered, especially if the patient has experienced a CHD event or has a CHD equivalent risk.
Conclusion
The Executive Summary and full text of the ATP III report can be found at www.nhlbi.nih.gov/guidelines/cholesterol. This Web site also contains a Palm Pilot program for the global risk assessment and slides of ATP III recommendations that can be downloaded.
The new guidelines have the potential to help millions of people avoid disabling and life-shortening CHD events. This great promise, however, requires that the guidelines be implemented and fully integrated into the care of patients by all health professionals. Based on past experience with previous guidelines, this is not likely to happen on its own. More than one-half of all patients who are candidates for treatment have yet to be identified. Most patients who are started on lipid-lowering therapy discontinue it within 1 year. Of those who receive treatment, only about one-third reach treatment goals.
We need to do better. The new guidelines provide a great opportunity as well as a great challenge to pharmacists everywhere. The question on the table is this: How will you make the new guidelines available to the patients you serve?
From Journal of the American College of Cardiology
Metabolic Syndrome: Connecting and Reconciling Cardiovascular and Diabetes Worlds
Scott M. Grundy, MD, PhD
Posted: 03/21/2006; J Am Coll Cardiol. 2006;47(6):1093-1100. © 2006 Elsevier Science, Inc.
Abstract and Introduction
Abstract
The metabolic syndrome is a constellation of risk factors of metabolic origin that are accompanied by increased risk for cardiovascular disease and type 2 diabetes. These risk factors are atherogenic dyslipidemia, elevated blood pressure, elevated plasma glucose, a prothrombotic state, and a proinflammatory state. The two major underlying risk factors for the metabolic syndrome are obesity and insulin resistance; exacerbating factors are physical inactivity, advancing age, and endocrine and genetic factors. The condition is progressive, beginning with borderline risk factors that eventually progress to categorical risk factors. In many patients, the metabolic syndrome culminates in type 2 diabetes, which further increases risk for cardiovascular disease. Primary treatment of the metabolic syndrome is lifestyle therapy—weight loss, increased physical activity, and an anti-atherogenic diet. But as the condition progresses, drug therapies directed toward the individual risk factors might be required. Ultimately, it might be possible to develop drugs that simultaneously modify all of the risk factors; at present, such drugs are in development but have not yet reached clinical practice.
Introduction
In 2001, the National Cholesterol Education Program (NCEP) Adult Treatment Panel III (ATP III) introduced the metabolic syndrome as a risk partner to elevated low-density lipoprotein (LDL)-cholesterol in cholesterol guidelines.[1,2] This step was in response to the increasing prevalence of obesity and its metabolic complications in the U.S. The term metabolic syndrome was applied to the clustering of risk factors that often accompany obesity and associate with increased risk for both atherosclerotic cardiovascular disease (ASCVD) and type 2 diabetes. One advantage of identifying this particular cluster of risk factors is that it should bring together the fields of cardiovascular disease and diabetes for a concerted and unified effort to reduce risk for both conditions simultaneously. Moreover, cardiovascular disease is the foremost killer of patients with diabetes, which is of interest to both fields.[3]
Risk Factor Clustering and Pathogenesis of the Metabolic Syndrome
The risk factors of the metabolic syndrome are of metabolic origin and consist of atherogenic dyslipidemia, elevated blood pressure, elevated plasma glucose, a prothrombotic state, and a proinflammatory state.[1,2,4-6] Atherogenic dyslipidemia comprises elevations of lipoproteins containing apolipoprotein B, elevated triglycerides, increased small particles of LDL, and low levels of high-density lipoproteins (HDL). Elevated plasma glucose falls in the range of either pre-diabetes or diabetes. A prothrombotic state signifies anomalies in procoagulant factors (i.e., increases in fibrinogen and factor VII), anti-fibrinolytic factors (i.e., increases in plasminogen activator inhibitor-1), platelet aberrations, and endothelial dysfunction. A proinflammatory state is characterized by elevations of circulating cytokines and acute phase reactants (e.g., C-reactive protein).
The pathogenesis of the metabolic syndrome is multifactorial.[1,2,4-6] The major underlying risk factors are obesity and insulin resistance. Risk associated with obesity is best identified by increased waist circumference (abdominal obesity). Insulin resistance can be secondary to obesity but can have genetic components as well. Several factors further exacerbate the syndrome: physical inactivity, advancing age, endocrine dysfunction, and genetic aberrations affecting individual risk factors. The increasing prevalence of the metabolic syndrome in the U.S. and worldwide, however, seems to be driven largely by increasing obesity, exacerbated by sedentary lifestyles.[7]
Evolution of the Metabolic Syndrome Concept and the Name
Our understanding of the metabolic syndrome stems from two types of research. Epidemiological studies have established a strong association of obesity with ASCVD[8,9] and type 2 diabetes.[10] Some of the increased risk for cardiovascular disease is due to well-established, obesity-induced risk factors (i.e., elevated plasma cholesterol, elevated blood pressure, and diabetes).[11] These risk factors have been called the metabolic complications of obesity.[12,13] Cardiovascular epidemiologists generally have not referred to this clustering as a syndrome.
The naming of the risk-factor grouping as a syndrome came largely from the diabetes field. For example, Reaven[14,15] coined the term "syndrome X" to signify a constellation of metabolic risk factors associated with insulin resistance. Reaven[14,15] contends that insulin resistance is the dominant underlying risk factor for syndrome X. In accord, others in the diabetes field have applied the name insulin resistance syndrome.[16-19] They have largely viewed obesity as an exacerbating factor, but without the same pathophysiological significance as insulin resistance. Among diabetologists, some have used the term metabolic syndrome as a more generic name for the aggregation of metabolic risk factors.[20-22] Regardless of the prefix, the diabetes field deserves much of the credit for introducing the term syndrome to define a grouping of metabolic risk factors. The ATP III guidelines[1,2] followed suit and employed the name metabolic syndrome because it seemed to be widely used to describe risk-factor aggregation.
Clinical Outcomes of the Metabolic Syndrome: Cardiovascular Disease and Type 2 Diabetes
In patients with the metabolic syndrome, relative risk for ASCVD ranges from 1.5 to 3.0 depending on the stage of progression.[23-34] When diabetes is not yet present, the risk of progression to type 2 diabetes is increased about five-fold compared with those without the syndrome.[35-39] Once diabetes develops, cardiovascular risk increases even more.[40,41] The natural history of the metabolic syndrome and its complications is depicted in Figure 1. Most individuals who develop the syndrome first acquire abdominal obesity without risk factors, but with time, multiple risk factors begin to appear. At the beginning, they usually are only borderline elevated; later, and in many individuals, they become categorically raised.[42] In some, the syndrome culminates in type 2 diabetes. If ASCVD develops, cardiovascular complications—cardiac arrhythmias, heart failure, and thrombotic episodes—often ensue. Those with diabetes can further acquire a host of complications including renal failure, diabetic cardiomyopathy, and various neuropathies. When ASCVD and diabetes exist concomitantly, risk for subsequent cardiovascular morbidity is very high. Patients with the metabolic syndrome can manifest a variety of other conditions that complicate their management: fatty liver, cholesterol gallstones, gout, and sleep apnea. The presence of several or all of these outcomes commonly leads to the use of multiple medications (polypharmacy). Not only does polypharmacy carry the risk of adverse drug interactions, but it also interferes with compliance and, for many patients, imposes a prohibitive cost burden.
Figure 1. Progression and outcomes of the metabolic syndrome. The metabolic syndrome arises largely out of abdominal obesity. With aging and increasing obesity, metabolic risk factors worsen. Many persons with the metabolic syndrome eventually develop type 2 diabetes. As the syndrome advances, risk for cardiovascular disease and its complications increases. Once diabetes develops, diabetic complications other than cardiovascular disease often develop. The metabolic syndrome encompasses each stage in the development of risk factors and type 2 diabetes.
The Conundrum Over Clinical Diagnosis of the Metabolic Syndrome
In 1998, a diabetes working group of the World Health Organization (WHO) proposed a set of criteria for a clinical diagnosis of the metabolic syndrome.[20] These made clinical evidence of insulin resistance, such as impaired glucose tolerance, impaired fasting glucose, or type 2 diabetes, necessary for the diagnosis. Two additional risk factors were also required from among the following: elevated triglycerides or low HDL, elevated blood pressure, obesity, and microalbuminuria. Shortly afterward, the European Group for the Study of Insulin Resistance (EGIR) proposed similar criteria for the insulin resistance syndrome.[18]
The ATP III[1,2] simplified the WHO criteria[20] by requiring any three of five simple clinical measures: increased waist circumference (abdominal obesity), elevated triglycerides, reduced HDL cholesterol, elevated blood pressure, and elevated glucose. Abdominal obesity was not made a requirement because some persons with insulin resistance can have multiple metabolic abnormalities without overt abdominal obesity. The American Heart Association and National Heart, Lung, and Blood Institute recently reaffirmed the utility of ATP III criteria, with minor modifications[4,5] (Table 1). Simultaneously, the International Diabetes Federation (IDF)[43] replaced the WHO criteria with criteria closer to ATP III. Waist circumference thresholds were made ethnic-specific, and abdominal obesity was required for diagnosis. The latter simplifies diagnosis in developing countries to save resources; only individuals exceeding waist thresholds will require laboratory measurements to finalize the diagnosis. Thus, at last, the ATP III update[4,5] and the IDF report[6] largely harmonize the clinical diagnosis of the syndrome.
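A minimal sketch of the resulting "any three of five" check follows. The numeric cutoffs are the commonly cited AHA/NHLBI-updated values (the article itself defers to Table 1), and the sketch ignores the provision that drug treatment for a given factor also counts toward it; treat both simplifications as assumptions.

def metabolic_syndrome_atp3(waist_cm, male, tg, hdl, sbp, dbp, glucose):
    # Returns True when at least three of the five criteria are met.
    criteria = [
        waist_cm > (102 if male else 88),   # abdominal obesity (cm)
        tg >= 150,                          # elevated triglycerides (mg/dL)
        hdl < (40 if male else 50),         # reduced HDL-C (mg/dL)
        sbp >= 130 or dbp >= 85,            # elevated blood pressure (mm Hg)
        glucose >= 100,                     # elevated fasting glucose (mg/dL)
    ]
    return sum(criteria) >= 3

# Hypothetical patient: abdominally obese man with high TG and low HDL-C.
print(metabolic_syndrome_atp3(108, True, 180, 36, 124, 78, 96))  # True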
ASCVD Risk in Metabolic Syndrome is Greater than the Sum of its Measured Risk Factors
The question has been raised as to whether the risk for ASCVD associated with the metabolic syndrome is greater than the sum of its risk factors.[44] The answer is in the affirmative. First, epidemiological studies strongly suggest that multiple risk factors raise risk more than the sum of the accompanying single risk factors;[45-48] risk rises geometrically instead of linearly. This phenomenon is called multiplicative risk. Second, several metabolic risk factors are not included in standard risk algorithms, but all of them seemingly impart independent risk for cardiovascular events. These are a prothrombotic state,[49-51] a proinflammatory state,[52,53] and elevated triglycerides.[54,55] This additional risk exceeds that which can be explained by standard risk factors. Third, some of the risk attributed to established risk factors (e.g., hypertension and low HDL) probably can be accounted for by unmeasured risk factors. For example, blood pressure-lowering with drugs fails to reduce risk as much as predicted from epidemiological studies;[56] a portion of the epidemiological risk attributed to hypertension likely is subsumed by unmeasured risk factors. Likewise, the robustness of low HDL to predict ASCVD risk almost certainly is due in part to the fact that it is a marker for other metabolic risk factors.[57,58] Fourth, because the metabolic syndrome often progresses and culminates in type 2 diabetes, the syndrome's long-term risk is underestimated at any one time. Thus several lines of evidence indicate that the risk accompanying the metabolic syndrome is greater than the sum of its measured components.
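The multiplicative point is easy to see with toy numbers; the relative risks below are invented for illustration and are not drawn from the cited studies.

relative_risks = [1.5, 1.5, 1.5]    # three hypothetical single-factor RRs

additive = 1 + sum(rr - 1 for rr in relative_risks)   # excesses simply added: 2.5x
multiplicative = 1.0
for rr in relative_risks:
    multiplicative *= rr                              # 1.5**3 = ~3.4x

print(f"Additive estimate: {additive:.2f}x vs multiplicative: {multiplicative:.2f}x")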
Diabetologist Discontent with Naming of the Metabolic Syndrome
The cardiovascular community generally has embraced the concept of risk-factor clustering as a syndrome, even though it originated in the diabetes field. Moreover, cardiovascular investigators have been enthusiastic about the metabolic syndrome because it accords well with the multiple-risk-factor paradigm that is widely adopted for risk management. Conversely, the name metabolic syndrome poses problems for some investigators in diabetes. The reasons can be summarized briefly (Fig. 2).
Figure 2. Interrelations and overlap of metabolic syndrome with insulin resistance, prediabetes, and type 2 diabetes. According to the insulin resistance hypothesis, the metabolic syndrome is caused predominantly by insulin resistance. The latter also contributes to prediabetes and, ultimately, to type 2 diabetes. About 75% of people with prediabetes and 86% of people with type 2 diabetes have the metabolic syndrome. Both metabolic syndrome and type 2 diabetes are known to predict cardiovascular disease.
First, a group of researchers believes that insulin resistance is the dominant cause of the syndrome.[14-19,59] These investigators prefer the term insulin resistance syndrome. The name metabolic syndrome leaves open a multifactorial causation, countering one view of the essential pathogenesis. According to the insulin-resistance hypothesis, even obesity elicits the metabolic risk factors through insulin resistance.
Second, the term prediabetes, which encompasses impaired fasting glucose and impaired glucose tolerance, is meant to identify an elevated risk for type 2 diabetes.[60] Yet approximately 70% to 75% of individuals with prediabetes meet clinical criteria for the metabolic syndrome.[61,62] According to some investigators,[63-65] prediabetes carries a predictive power for ASCVD similar to that of the metabolic syndrome. But this predictive potential most likely can be explained by accompanying metabolic risk factors.[66] Consequently, the overlap between prediabetes and metabolic syndrome creates a tension for nomenclature within the diabetes world.
Third, both ATP III and IDF criteria[4-6] allow for a diagnosis of metabolic syndrome to be applied to patients with type 2 diabetes who manifest a clustering of risk factors characteristic of the syndrome. The ATP III indeed defines diabetes itself as a high-risk condition for ASCVD. This high risk is due largely to associated risk factors. For example, Alexander et al.[29] reported that the metabolic syndrome, as defined by ATP III, accounts for most of the increased risk for coronary heart disease accompanying type 2 diabetes. Moreover, about 86% of persons over age 50 years living in the U.S. who have type 2 diabetes will qualify for a diagnosis of metabolic syndrome.[29] Therefore, it is not surprising that the overlap of the metabolic syndrome with categorical hyperglycemia and type 2 diabetes poses significant identity issues for the diabetes community. It is not entirely clear whether type 2 diabetes as a concept is strictly hyperglycemia caused by concomitant insulin resistance and decreased insulin secretion[67,68] or whether it should include the metabolic syndrome as one of its components.[29]
Regarding type 2 diabetes, the conflict in nomenclature and definitions has important clinical implications. Cardiovascular risk factors in most patients with type 2 diabetes deserve greater clinical attention than they currently receive. Intensive management including drug treatment usually is required for elevated cholesterol and blood pressure, not to mention hyperglycemia; furthermore, low-dose aspirin typically is recommended for most patients with type 2 diabetes to reduce a prothrombotic state.[69] Unfortunately, many physicians who treat patients with type 2 diabetes have failed to recognize the necessity to substantially lower cholesterol and blood-pressure levels and to add aspirin prophylaxis. Clinical trials clearly document benefit of intensive reduction of non-glucose risk factors—cholesterol[70-73] and blood pressure [74,75]—in patients with type 2 diabetes. This need is strongly stated in cholesterol and blood pressure guidelines.[1,2,75] For this reason, it behooves diabetes agencies as well as the cardiovascular field to take an aggressive approach to management of all cardiovascular risk factors in patients with type 2 diabetes who have features of the metabolic syndrome.
The Metabolic Syndrome is Not a Reliable Risk Assessment Tool for Short-Term Risk
The metabolic syndrome carries increased long-term risk for both ASCVD and diabetes, as well as higher short-term risk. The ATP III[1,2] introduced the syndrome primarily to augment the clinical management of obese persons who have progressed to the stage of multiple risk factors (Fig. 1). Importantly, the metabolic syndrome is not a reliable tool for global risk assessment for ASCVD in the short term (e.g., 10-year risk). It does not include all of the risk factors contained in standard risk-prediction algorithms (e.g., age, gender, total cholesterol, smoking status). Thus, 10-year risk assessment is best carried out with algorithms such as Framingham risk scoring.[1,2] Even so, individuals with the metabolic syndrome live on a higher trajectory of long-term risk for both ASCVD and type 2 diabetes. Consequently, the progressive nature of the syndrome should be recognized (Fig. 1).
But even risk algorithms based on established risk factors are limited in predictive power for individuals. More effective prediction tools are needed. One promising technique is identification of atherosclerotic burden through non-invasive imaging.[76,77] The finding of significant atherosclerotic burden in patients who otherwise would not be identified as being at high risk could trigger more intensive interventions such as cholesterol-lowering drugs and low-dose aspirin. Patients with the metabolic syndrome might be particularly good candidates for atherosclerosis imaging. To date, however, the potential of this strategy has not been fully developed.
All patients with the metabolic syndrome deserve global risk assessment, whether by risk-factor algorithms or by atherosclerosis imaging; its essential purpose is to identify candidates for drug therapies for prevention. But once a person is found to have the syndrome, lifestyle therapies should be introduced, reinforced, and monitored. Drug therapy is a secondary consideration that should be guided by global risk assessment.
Lifestyle Modification is the Primary Therapy of the Metabolic Syndrome
The ATP III[1,2] embedded the metabolic syndrome into cholesterol guidelines to reinforce clinical lifestyle therapies. These therapies consist of weight reduction, increased physical activity, and an anti-atherogenic diet; smoking cessation, in addition, is mandatory. Lifestyle intervention unfortunately is often neglected in routine practice. It has the potential to reduce the severity of all metabolic risk factors at every stage of progression as well as to slow their progression[8] (Fig. 1). Drug therapies of established risk factors alone are not sufficient to completely reverse risk associated with the syndrome (i.e., risk for either ASCVD or diabetes). Clinical trials consistently show a substantial residue of risk that cannot be reversed with drugs.[56,72] Lifestyle modifications are one way to cut into this residual risk. In addition, institution of lifestyle therapies early in the syndrome can delay risk-factor progression and the need for drug therapies. Beyond reducing risk for cardiovascular disease, weight reduction and increased physical activity slow progression to type 2 diabetes in individuals with the metabolic syndrome.[78,79] Thus the combined effect of lifestyle therapies in reducing cardiovascular risk factors and delaying the emergence of diabetes doubly validates the primacy of lifestyle intervention for this syndrome.
Has the Pharmaceutical Industry Usurped the Metabolic Syndrome?
When ATP III guidelines were crafted to include the metabolic syndrome, the pharmaceutical industry recognized it as a potential target of drug therapy. The idea of reducing multiple risk factors with a single drug or a drug combination obviously is attractive and needed. It is curious that one criticism leveled against the metabolic-syndrome concept is that the pharmaceutical industry has tried to take advantage of it to promote or develop new drugs. New drug development need not detract from the priority given to lifestyle modification. Moreover, the challenge for developing a new drug that will substantially reduce multiple risk factors is formidable. Some in industry might have hoped that the scientific community would agree on a single criterion for the syndrome and, if so, that regulatory agencies would accept this criterion so that a new drug could be registered for the metabolic syndrome. This hope is unrealistic, not because of the lack of a single criterion, but because regulatory agencies are unlikely to allow registration for new targets in the cardiovascular field without clinical end-point trials.
At present, the only drugs approved for treatment of risk factors are those that target the individual risk factors: lipid-lowering drugs, antihypertensive agents, hypoglycemic drugs, anti-platelet drugs, and weight-loss agents. For the use of these drugs in persons with the metabolic syndrome, a physician should follow the current treatment guidelines of the NCEP,[1,2] the Sixth Joint National Committee report on blood pressure treatment,[75] the American Diabetes Association,[69,80] the American Heart Association/American College of Cardiology,[81,82] and the National Institutes of Health Obesity Initiative.[8] Pharmacological therapies for the two underlying risk factors for the syndrome—obesity and insulin resistance—are under development, albeit in the early stages. They nonetheless hold promise for delaying progression of the condition. Candidate drugs for treatment of the metabolic syndrome as a whole, and for reducing risk for ASCVD and/or diabetes, are weight-reduction drugs, peroxisome proliferator-activated receptor (PPAR)-alpha agonists (fibrates), PPAR-gamma agonists (thiazolidinediones [TZDs]), and dual PPAR agonists.
Two weight-loss drugs—sibutramine and orlistat—are already approved by the Food and Drug Administration. These improve all of the metabolic syndrome risk factors but produce only moderate weight loss.[83,84] A new and promising weight-loss drug is a selective cannabinoid receptor-1 (CB1) antagonist called rimonabant. Endocannabinoids, which activate G-protein-coupled CB1 receptors in the hypothalamus and limbic forebrain, accentuate hyperphagia.[85] Rimonabant suppresses endogenous activation of the endocannabinoid system.[86] The drug produces a 5% to 10% weight loss for up to two years[87] and might have systemic actions that independently reduce risk factors for the metabolic syndrome.[88,89]
Clinical trials suggest that fibrates will independently reduce risk for ASCVD through treatment of atherogenic dyslipidemia, possibly because of their anti-inflammatory properties.[2] The TZDs lessen insulin resistance and modestly improve the various metabolic risk factors. A recent clinical trial found a strong trend toward decreased cardiovascular outcomes with one TZD, pioglitazone.[90] Dual PPAR agonists combine PPAR-alpha and PPAR-gamma agonism in a single agent and thus have favorable effects on several metabolic risk factors.[91,92] In spite of their promise, all of these drugs have outcome hurdles to clear before they can be approved for routine use in patients with the metabolic syndrome.
Conclusions
The metabolic syndrome consists of a clustering of risk factors of metabolic origin that together are associated with higher risk for ASCVD and diabetes. The syndrome occurs in approximately one-fourth of American adults. It is accompanied by insulin resistance but its increasing prevalence is due largely to escalating obesity. Simple clinical criteria are available to identify persons most likely to have the syndrome. These individuals typically have several metabolic risk factors that are not measured in clinical practice; thus the syndrome as a whole conveys a greater risk for ASCVD and diabetes than revealed by usual clinical measures. Moreover, the syndrome is a progressive condition that worsens with advancing age and increasing obesity. It often culminates in type 2 diabetes, which carries a particularly high risk for both cardiovascular events and other complications. The metabolic syndrome itself is not a robust risk assessment tool for estimating absolute 10-year risk; but its presence calls for more extensive short-term risk assessment, either by risk-factor scoring or imaging for subclinical atherosclerosis. The primary intervention is lifestyle therapy, particularly weight reduction and increased exercise. Lifestyle therapies will dampen the syndrome and slow its progression at every stage but particularly in its early phases. Drug therapies should be based on global risk assessment and should follow current treatment guidelines for each of the risk factors. But new drugs under development promise to better treat the syndrome as a whole. The metabolic syndrome should serve to bring cardiovascular and diabetes fields together in a joint effort to reduce both ASCVD and diabetes. At present this joint action is being hampered by the issue of how to integrate the metabolic syndrome into concepts of insulin resistance, prediabetes, and type 2 diabetes, all of which are important to the diabetes field. Nonetheless, the common clustering of metabolic risk factors in obese persons is a fact of American medicine; and it deserves increased attention for clinical management of affected patients.