| id (string, length 1–5) | prompt (list of 1 message) | completion (list of 1 message) | Label (string, 3 classes) |
|---|---|---|---|
109
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Liraglutide, a once-daily human GLP-1 analogue, added to a sulphonylurea over 26 weeks produces greater improvements in glycaemic and weight control compared with adding rosiglitazone or placebo in subjects with Type 2 diabetes (LEAD-1 SU)\n\n ABSTRACT.AIM:\nTo compare the effects of combining liraglutide (0.6, 1.2 or 1.8 mg/day) or rosiglitazone 4 mg/day (all n ≥ 228) or placebo (n = 114) with glimepiride (2–4 mg/day) on glycaemic control, body weight and safety in Type 2 diabetes.\n\nABSTRACT.METHODS:\nIn total, 1041 adults (mean ± sd), age 56 ± 10 years, weight 82 ± 17 kg and glycated haemoglobin (HbA1c) 8.4 ± 1.0% at 116 sites in 21 countries were stratified based on previous oral glucose-lowering mono : combination therapies (30 : 70%) to participate in a five-arm, 26-week, double-dummy, randomized study.\n\nABSTRACT.RESULTS:\nLiraglutide (1.2 or 1.8 mg) produced greater reductions in HbA1c from baseline, (−1.1%, baseline 8.5%) compared with placebo (+0.2%, P < 0.0001, baseline 8.4%) or rosiglitazone (−0.4%, P < 0.0001, baseline 8.4%) when added to glimepiride. Liraglutide 0.6 mg was less effective (−0.6%, baseline 8.4%). Fasting plasma glucose decreased by week 2, with a 1.6 mmol/l decrease from baseline at week 26 with liraglutide 1.2 mg (baseline 9.8 mmol/l) or 1.8 mg (baseline 9.7 mmol/l) compared with a 0.9 mmol/l increase (placebo, P < 0.0001, baseline 9.5 mmol/l) or 1.0 mmol/l decrease (rosiglitazone, P < 0.006, baseline 9.9 mmol/l). Decreases in postprandial plasma glucose from baseline were greater with liraglutide 1.2 or 1.8 mg [−2.5 to −2.7 mmol/l (baseline 12.9 mmol/l for both)] compared with placebo (−0.4 mmol/l, P < 0.0001, baseline 12.7 mmol/l) or rosiglitazone (−1.8 mmol/l, P < 0.05, baseline 13.0 mmol/l). Changes in body weight with liraglutide 1.8 mg (−0.2 kg, baseline 83.0 kg), 1.2 mg (+0.3 kg, baseline 80.0 kg) or placebo (−0.1 kg, baseline 81.9 kg) were less than with rosiglitazone (+2.1 kg, P < 0.0001, baseline 80.6 kg). Main adverse events for all treatments were minor hypoglycaemia (< 10%), nausea (< 11%), vomiting (< 5%) and diarrhoea (< 8%).\n\nABSTRACT.CONCLUSIONS:\nLiraglutide added to glimepiride was well tolerated and provided improved glycaemic control and favourable weight profile.\n\nBODY.INTRODUCTION:\nMost drugs that target Type 2 diabetes (T2D) also cause weight gain or hypoglycaemia, or both, with the risk increasing with combination therapy. Glucagon-like peptide-1 (GLP-1)-based therapies stimulate insulin secretion and reduce glucagon secretion only during hyperglycaemia. GLP-1 also slows gastric emptying and reduces appetite [1]. Although American Diabetes Association (ADA)/European Association for the Study of Diabetes (EASD) guidelines recommend lifestyle and metformin as initial therapy for T2D [2], sulphonylureas are used widely, particularly when metformin or thiazolidinediones are not tolerated. 
Glycaemic control eventually deteriorates with sulphonylureas while hypoglycaemia and weight gain are common [3]. Incretin therapy improves glycaemic control with low hypoglycaemic risk, while delayed gastric emptying and reduced appetite can reduce weight [1,4]. Liraglutide is a once-daily human GLP-1 analogue with 97% linear amino-acid sequence homology to human GLP-1 [5] and half-life of 13 h after subcutaneous administration that produces 24-h blood glucose control [6]. Liraglutide monotherapy for 14 weeks reduced glycated haemoglobin (HbA1c) by 1.7% and fasting plasma glucose (FPG) by 3.4 mmol/l without causing hypoglycaemia, along with weight loss (∼3 kg) compared with placebo [7]. Improvements in pancreatic B-cell function [7–9] and blood pressure [7], along with decreased glucagon secretion [7,10], also occurred. As part of the phase 3 programme [the Liraglutide Effect and Action in Diabetes (LEAD) programme] with liraglutide in > 4000 subjects with T2D as monotherapy or in combination therapy, this 26-week trial examined liraglutide plus glimepiride compared with either placebo or rosiglitazone added to glimepiride on glycaemic control and body weight.\n\nBODY.SUBJECTS AND METHODS.STUDY PARTICIPANTS:\nInclusion criteria: T2D treated with oral glucose-lowering agents (OGLAs) for ≥ 3 months; 18–80 years of age; HbA1c 7.0–11.0% (previous OGLA monotherapy) or 7.0–10.0% (previous OGLA combination therapy); body mass index (BMI) ≤ 45.0 kg/m2. Exclusion criteria: used insulin within 3 months, impaired liver or renal function, uncontrolled hypertension (≥ 180/100 mmHg), cancer or used any drugs apart from OGLAs likely to affect glucose concentrations. Subjects provided written informed consent. The study was conducted in accordance with good clinical practice guidelines and approved by independent ethics committees.\n\nBODY.SUBJECTS AND METHODS.STUDY DESIGN:\nThe study was a 26-week, double-blind, double-dummy, randomized, active-control, five-armed parallel (116 sites in 21 countries, primarily Europe and Asia) trial enrolling 1041 subjects (1–37 subjects per centre), all receiving glimepiride (2–4 mg/day) in combination with (Fig. 1): FIGURE 1Overview of trial design and treatment arms. one of three liraglutide doses [0.6, 1.2 or 1.8 mg, injected subcutaneously (Novo Nordisk, Bagsvaerd, Denmark) and rosiglitazone placebo];liraglutide placebo and rosiglitazone placebo;liraglutide placebo and rosiglitazone 4 mg/day (rosiglitazone; AvandiaTM; GlaxoSmithKline, London, UK). The doses of rosiglitazone and glimepiride used were determined by the highest doses approved in all participating counties. After discontinuing previous OGLAs except glimepiride, separate 2-week titration and maintenance periods with glimepiride (open-label) preceded randomization (Fig. 1). Subjects were stratified according to previous treatment (monotherapy or combination therapy). After randomization, 2-week treatment titration and 24-week treatment (maintenance) phases (Fig. 1) were completed. Liraglutide was up-titrated weekly in 0.6-mg increments until allocated doses were reached. Glimepiride could be adjusted between 2 and 4 mg/day in case of hypoglycaemia or other adverse events (AEs), while other drug doses were fixed. Liraglutide (active and placebo) was supplied in 3-ml pre-filled pens with 31G needles (Novo Nordisk). Subjects were encouraged to inject liraglutide into the upper arm, thigh or abdomen at the same time each day. 
Rosiglitazone and glimepiride were taken in the morning or with the first meal.\n\nBODY.SUBJECTS AND METHODS.STUDY MEASUREMENTS.EFFICACY:\nThe primary endpoint was change from baseline HbA1c after 26 weeks of treatment. Secondary endpoints included: percentages of subjects reaching HbA1c (< 7.0%, ≤ 6.5%), FPG (5.0 to ≤ 7.2 mmol/l) and postprandial plasma glucose (PPG; 10.0 mmol/l) targets [11–13]; changes in body weight, FPG, mean PPG, indices of pancreatic B-cell function [pro-insulin : insulin ratio and homeostasis model assessment (HOMA)-B], HOMA-insulin resistance (HOMA-IR) and blood pressure (BP). HbA1c was measured centrally (MDS Pharma Services, King of Prussia, PA, USA) by high performance liquid chromatography while plasma glucose (PG) was self-measured using MediSense® glucose meters (Abbott Diagnostics Inc., Abbott Park, IL, USA). Insulin and C-peptide were measured by chemiluminescence, proinsulin by ELISA, while glucagon was measured in aprotinin-treated plasma by radioimmunoassay. The proinsulin : insulin ratio was calculated from fasting insulin and fasting proinsulin. HOMA-B and HOMA-IR were both calculated from FPG and fasting insulin. Samples measured centrally were collected and transported according to detailed procedures in the MDS Pharma Services manual. Samples stored at ambient temperature were shipped by courier to the central laboratory on the same day as collection, while frozen samples were shipped every 3 weeks.\n\nBODY.SUBJECTS AND METHODS.STUDY MEASUREMENTS.SAFETY:\nSafety variables included hypoglycaemic episodes based on PG levels (< 3.1 mmol/l), liraglutide antibodies including cross-reacting and neutralizing antibodies, tolerability (gastrointestinal complaints) and pulse. AEs, vital signs, electrocardiogram (ECG), biochemical and haematology measures including calcitonin were also monitored. Self-treated hypoglycaemic episodes were classified as minor, while those requiring third-party assistance were considered major. Serum antibodies against liraglutide were measured by radioimmunoprecipitation assay.\n\nBODY.SUBJECTS AND METHODS.STATISTICAL ANALYSES:\nAll efficacy and safety analyses were based on intent-to-treat criteria, defined as subjects who were exposed to ≥ 1 dose of trial product(s). Efficacy endpoints were analysed by ancova with treatment, country and previous glucose-lowering treatment as fixed effects and baseline values as covariates. Missing data were imputed by last observation carried forward (LOCF). Sample size calculations were based on predicted HbA1c and body weight after trial completion. As the three liraglutide + glimepiride groups were to be compared with both rosiglitazone + glimepiride and glimepiride monotherapy, two calculations were performed. These sample size calculations assumed a standard deviation of 1.2% of HbA1c, the non-inferiority/superiority margin vs. active control was set to 0.4% and the difference to detect (superiority vs. placebo) was set to 0.5%. For body weight, a coefficient of variation of 3% (based on phase 2a trials for liraglutide) and a difference to detect of 3% were assumed. A combined power (calculated as the product of the marginal powers for HbA1c and body weight) of at least 85% was required. These calculations indicated that at least 168 and 81 patients completing the study would be needed for the combination and glimepiride monotherapy groups, respectively. 
Assuming a drop-out rate of 25%, targets for randomization were 228 in each of the combination therapy groups and 114 in the placebo group (total n = 1026). To protect against Type 1 errors, HbA1c was analysed using hierarchical testing for descending doses of liraglutide. First, superiority of liraglutide 1.8 mg to placebo was tested and, only if superior to placebo, non-inferiority to rosiglitazone was tested. If non-inferiority was obtained, superiority to rosiglitazone for liraglutide 1.8 mg was tested and superiority to placebo for liraglutide 1.2 mg was tested. If superiority was confirmed, non-inferiority to rosiglitazone would be tested and so on (i.e. testing sequence was stopped when hypotheses could not be rejected). Superiority was concluded when upper limits of two-sided 95% confidence intervals (CIs) for treatment differences were below 0%; non-inferiority was concluded if these values were < 0.4%; for secondary endpoints, Type 1 errors were controlled by estimating simultaneous CIs using Dunnett's method. Proportions of subjects achieving HbA1c (HbA1c < 7.0%, and ≤ 6.5%) and FPG (5.0 ≤ FPG ≤ 7.2 mmol/l) targets [13] were compared between treatments using logistic regression with allocated treatment and baseline values as covariates. Chi-square analyses assessed differences in treatments for percentages of subjects achieving no, one, two or three PPG values < 10 mmol/l [13]. Hypoglycaemic episodes were analysed under the assumption that number per subject were negatively binomially distributed using a generalized linear model, including treatment and country as fixed effects. Other safety data were compared by descriptive statistics. Values for descriptive statistics are expressed as means ± sd, while ancova results are expressed as least square means ± SEM or with 95% CI unless otherwise noted. Significance levels were set to 5% for two-sided tests and 2.5% for one-sided tests.\n\nBODY.RESULTS.DISPOSITION AND DEMOGRAPHICS:\nThe treatment groups were well balanced (Table 1). Of 1712 subjects screened, 1041 were randomized and 1040 were exposed to trial drugs; 147 subjects (14.1%) withdrew (Fig. 2). Withdrawals were higher with placebo (27%) and rosiglitazone treatment (16%) compared with liraglutide 0.6 mg (11%), liraglutide 1.2 mg (14%) and liraglutide 1.8 mg (9%) treatment. Thirty-eight subjects (3.7%) withdrew as a result of AEs (Fig. 2). Table 1 Demographic characteristics of study participants Liraglutide 0.6 mg ( n = 233) Liraglutide 1.2 mg ( n = 228) Liraglutide 1.8 mg ( n = 234) Placebo ( n = 114) Rosiglitazone ( n = 232) Male : female (%) 54 : 46 45 : 55 53 : 47 47 : 53 47 : 53 Age (years) 55.7 ± 9.9 57.7 ± 9.0 55.6 ± 10.0 54.7 ± 10.0 56.0 ± 9.8 Duration of diabetes (years) 6.5 (4.0,10.2) 6.7 (4.0,10.7) 6.5 (3.7,10.5) 6.5 (4.5,10.6) 6.6 (4.3,10.7) Previous on mono : combi (%) 30 : 70 31 : 69 27 : 73 32 : 68 32 : 68 FPG (mmol/l) 10.0 ± 2.4 9.8 ± 2.7 9.7 ± 2.4 9.5 ± 2.0 9.9 ± 2.5 HbA 1c (%) 8.4 ± 1.0 8.5 ± 1.1 8.5 ± 0.9 8.4 ± 1.0 8.4 ± 1.0 Diabetic retinopathy (%) 17.2 14.9 12.0 13.2 16.4 Hypertension (%) 69.1 68.0 69.7 64.9 66.8 BMI (kg/m 2 ) 30.0 ± 5.0 29.8 ± 5.1 30.0 ± 5.1 30.3 ± 5.4 29.4 ± 4.8 Weight (kg) 82.6 ± 17.7 80.0 ± 17.1 83.0 ± 18.1 81.9 ± 17.1 80.6 ± 17.0 Systolic blood pressure (mmHg) 131 ± 16 133 ± 15 132 ± 16 131 ± 15.3 133 ± 15 Data are mean ± sd and percentages, except for duration of diabetes, where data are median, 25th and 75th percentile. 
BMI, body mass index; FPG, fasting plasma glucose; HbA 1c , glycated haemoglobin; mono : combi, previous treatment with either monotherapy or combination therapy; sd , standard deviation. FIGURE 2Flow of patients through the study.\n\nBODY.RESULTS.EFFICACY.HBA:\nHbA1c decreased rapidly with all doses of liraglutide when added to glimepiride compared with either rosiglitazone or placebo (i.e. glimepiride monotherapy), irrespective of previous therapy. The greatest decreases occurred with liraglutide 1.2 and 1.8 mg (Fig. 3a–c). After 26 weeks, HbA1c decreased by 1.1% from baseline (primary endpoint) with either liraglutide 1.2 or 1.8 mg, respectively, compared with either placebo (+0.2%) or rosiglitazone (−0.4%) (Fig. 3d). Estimated treatment differences and 95% CIs to placebo were: liraglutide 1.8 mg: −1.4% (1.6, −1.1); liraglutide 1.2 mg: −1.3% (1.5, −1.1); liraglutide 0.6 mg: −0.8% (−1.1, −0.6); rosiglitazone: −0.7% (−0.9, −0.4). All liraglutide doses were superior to placebo (P < 0.0001), while the two higher liraglutide doses were superior to rosiglitazone (P < 0.0001). Liraglutide 0.6 mg was non-inferior to rosiglitazone. Rosiglitazone also was superior to placebo (P < 0.0001). FIGURE 3Mean glycated haemoglobin (HbA1c) by treatment and week (intent-to-treat population with last observation carried forward): (a) overall population; (b) previously on monotherapy; or (c) previously on combination therapy; (d) mean changes in HbA1c from baseline after 26 weeks of treatment. Keys: (a–c) liraglutide 0.6 mg: grey dotted line with squares; liraglutide 1.2 mg: black solid line with triangles; liraglutide 1.8 mg: black dotted line with squares; rosiglitazone: grey solid line with circles; placebo: black solid line with circles. (d) liraglutide 0.6 mg: black stripes on white; liraglutide 1.2 mg: white stripes on black, liraglutide 1.8 mg: grey tint; rosiglitazone: white; placebo: black. ****P < 0.0001 compared with placebo; ††††P < 0.0001 compared with rosiglitazone. HbA1c decreases were greater for subjects who entered from monotherapy compared with combination therapy (Fig. 3d). However, because the increase with placebo was higher for individuals entering on combination therapy (0.7 vs. 0.23%), the differences between treatment groups in favour of liraglutide were similar irrespective of whether subjects were treated previously with monotherapy or combination therapy. Neither age, gender nor BMI affected these trends.\n\nBODY.RESULTS.EFFICACY.PERCENTAGE REACHING AN HBA:\nThe percentage of subjects reaching ADA [2] and International Diabetes Federation (IDF)/American Association of Clinical Endocrinologists (AACE) [11,12] treatment HbA1c goals with liraglutide was dose dependent (Fig. 4). At week 26, 42% and 21% of subjects treated with liraglutide 1.8 mg reached an HbA1c < 7.0% and ≤ 6.5%, respectively, compared with 8% and 4% for placebo (Fig. 4). The estimated proportion of subjects treated with either liraglutide 1.2 or 1.8 mg reaching ADA/EASD and IDF/AACE HbA1c targets was substantially greater compared with either placebo (P < 0.0001) or rosiglitazone (Fig. 4; P ≤ 0.0003), with more patients reaching < 7.0% with liraglutide 1.8 mg compared with 1.2 mg (P = 0.018). 
FIGURE 4Subjects achieving specified glycated haemoglobin (HbA1c) levels: (a) percentage reaching HbA1c < 7.0% (American Diabetes Association/European Association for the Study of Diabetes target); (b) percentage reaching HbA1c < 6.5% (International Diabetes Federation/American Association of Clinical Endocrinologists targets); (c) cumulative distribution of HbA1c at 26 weeks for the intent-to-treat (ITT) population; and (d) for the ITT last observation carried forward (LOCF) population. Keys: (a, b) liraglutide 0.6 mg: black stripes on white; liraglutide 1.2 mg: white stripes on black, liraglutide 1.8 mg: grey tint; rosiglitazone: white; placebo: black. (c, d) liraglutide 0.6 mg: pale grey solid line; liraglutide 1.2 mg: grey solid line, liraglutide 1.8 mg: black solid line; rosiglitazone: dotted black line; placebo: dotted grey line; baseline visit: long dashed black line. ****P < 0.0001 or **P < 0.01 compared with placebo; ††††P < 0.0001 or †††P = 0.0005 compared with rosiglitazone.\n\nBODY.RESULTS.EFFICACY.FASTING PLASMA GLUCOSE:\nBy week 2, subjects treated with liraglutide had rapid and larger decreases in FPG vs. comparator treatment. At week 26, all doses of liraglutide decreased FPG more than did placebo (Fig. 5; P < 0.0001), while only liraglutide 1.2 or 1.8 mg produced greater reductions than rosiglitazone. FPG treatment differences to placebo were 1.7 mmol/l for liraglutide 0.6 mg and 2.6 mmol/l for both liraglutide 1.2 and 1.8 mg. An 0.7-mmol/l greater reduction in FPG was achieved with either liraglutide 1.2 or 1.8 mg compared with rosiglitazone (P ≤ 0.006) after 26 weeks. FIGURE 5Mean changes from baseline in fasting plasma glucose after 26 weeks of treatment. ****P < 0.0001 compared with placebo; ††P < 0.01 compared with rosiglitazone. The percentage of subjects achieving FPG values between 5.0 mmol/l and ≤ 7.2 mmol/l (ADA target) after 26 weeks was higher with liraglutide: 0.6 mg (19%; P = 0.002); 1.2 mg (37%; P < 0.001); and 1.8 mg (38%;P < 0.001) compared with placebo (7%). The liraglutide 1.2 and 1.8 mg treatment groups also had more subjects achieving the same FPG target at end of treatment compared with rosiglitazone (26%) (P = 0.007 and P = 0.01, respectively).\n\nBODY.RESULTS.EFFICACY.POSTPRANDIAL PLASMA GLUCOSE:\nPPG was reduced similarly after each meal. The greatest reductions in mean PPG values from baseline (average of values obtained 90 min after breakfast, lunch and evening meal) occurred with liraglutide 1.2 mg (2.5 mmol/l) and liraglutide 1.8 mg (2.7 mmol/l). By comparison, the reduction from baseline in mean PPG values was 1.8 mmol/l for rosiglitazone and liraglutide 0.6 mg and 0.4 mmol/l for placebo. Treatment differences for PPG were greater with all doses of liraglutide compared with placebo (1.5–2.4 mmol/l; P < 0.0001) and greater with liraglutide 1.2 mg (0.64 mmol/l; P = 0.043) and 1.8 mg (0.87 mmol/l;P = 0.0022) compared with rosiglitazone.\n\nBODY.RESULTS.EFFICACY.PPG MEASUREMENTS < 10.0 MMOL/L:\nThe percentage of subjects with one, two or three PPG measurements < 10.0 mmol/l (ADA target) were greater for all doses of liraglutide compared with placebo (P < 0.05) but not rosiglitazone.\n\nBODY.RESULTS.BODY WEIGHT:\nMean weight at baseline was 81.6 kg. Mean reductions in weight from baseline to end of treatment were 0.2 kg with liraglutide 1.8 mg and 0.1 kg with placebo treatment, while increases occurred with either liraglutide 0.6 mg (0.7 kg), liraglutide 1.2 mg (0.3 kg) or rosiglitazone (2.1 kg) (Fig. 6). 
Unlike rosiglitazone, weight did not increase substantially with liraglutide and the differences between rosiglitazone and liraglutide were statistically significant (−2.3 to −1.4 kg; P < 0.0001), although there were no significant differences compared with placebo. Gender appeared to have no influence on the results, as indicated when added as a fixed effect in the ancova model. FIGURE 6Mean changes in body weight from baseline after 26 weeks of treatment. *P < 0.05 compared with placebo; ††††P < 0.0001 compared with rosiglitazone.\n\nBODY.RESULTS.INDICES OF PANCREATIC B-CELL FUNCTION AND INSULIN RESISTANCE:\nReductions in the proinsulin : insulin ratio were greater with both liraglutide 1.2 and 1.8 mg compared with either rosiglitazone or placebo (Table 2; P ≤ 0.02). HOMA-B increased with liraglutide (1.8 or 1.2 mg) compared with rosiglitazone (P < 0.05), while this increase was only different to placebo with liraglutide 1.2 mg (P = 0.01) and not liraglutide 1.8 mg (P = 0.051). There were no significant differences between treatments for HOMA-IR. Table 2 Selected indices of pancreatic B-cell function Variable Treatment Baseline Week 26 (LOCF) Least square difference from placebo (95% CI) Least square difference from rosiglitazone (95% CI) Proinsulin : insulin ratio Liraglutide 0.6 mg 0.42 ± 0.22 0.38 ± 0.24 −0.05 (−0.11; 0.00) −0.02 (−0.06; 0.03) Liraglutide 1.2 mg 0.45 ± 0.31 0.33 ± 0.20 −0.10 (−0.16; −0.05) † −0.07 (−0.11; −0.02) * Liraglutide 1.8 mg 0.48 ± 0.33 0.36 ± 0.20 −0.09 (−0.15; −0.03) * −0.05 (−0.10; −0.01) * Placebo 0.44 ± 0.27 0.46 ± 0.29 Rosiglitazone 0.45 ± 0.29 0.40 ± 0.20 HOMA-B (%) Liraglutide 0.6 mg 51 ± 43.3 70 ± 88.6 15 (−19.10; 49.0) 11 (−16.7; 39.0) Liraglutide 1.2 mg 71 ± 254.3 99 ± 184.3 43 (8.10; 76.9) * 39 (10.3; 67.0) * Liraglutide 1.8 mg 56 ± 84.6 91 ± 108.2 34 (−0.23; 68.5) 30 (2.00; 58.6) * Placebo 56 ± 103.3 52 ± 107.3 Rosiglitazone 46 ± 36.2 59 ± 63.3 * P ≤ 0.05; † P < 0.0001. CI, confidence interval; HOMA, homeostatis model assessment; LOCF, last observation carried forward. \n\nBODY.RESULTS.BLOOD PRESSURE AND PULSE:\nAlthough decreases in systolic blood pressure occurred with either liraglutide 1.2 or 1.8 mg (2.6–2.8 mmHg), they were not significantly different from placebo or rosiglitazone (0.9–2.3 mmHg). Reductions in diastolic blood pressure also occurred with all treatments (0.7–1.4 mmHg), with no significant differences between treatments. Pulse increases above baseline ranged from 2 to 4 beats/min with the three doses of liraglutide and 1 beat/min with rosiglitazone, while pulse decreased by 1 beat/min with placebo. Changes in pulse for all doses of liraglutide were significant vs. placebo (P ≤ 0.002). This also was true with either liraglutide 1.8 or 1.2 mg compared with rosiglitazone (P < 0.01).\n\nBODY.RESULTS.SAFETY:\nThe most common treatment-emergent AEs that were considered by investigators to be either possibly or probably related to liraglutide were gastrointestinal (diarrhoea, nausea, dyspepsia and constipation) and nervous system disorders (headache and dizziness), particularly during the first 4 weeks. Nausea was highest with liraglutide 1.2 mg (10.5%) and lowest with placebo (1.8%). Vomiting (4.4%) and diarrhoea (7.9%) were also higher with liraglutide 1.2 mg. Withdrawals because of nausea ranged from 0.9–2.2%, vomiting 0.4–0.9% and diarrhoea 0–1.3%. Nausea was more common with liraglutide compared with placebo and rosiglitazone, particularly during the first 4 weeks (Fig. 7). 
Frequency of nausea was less in the liraglutide 0.6 mg treatment group compared with the higher doses of liraglutide. Generally, the occurrence of nausea dissipated from 4 to 26 weeks of treatment in all groups using liraglutide (Fig. 7). FIGURE 7Percentage of subjects experiencing nausea over the course of the study. Key: liraglutide 0.6 mg with glimepiride: black line with filled circles; liraglutide 1.2 mg with glimepiride: black line with filled triangles; liraglutide 1.8 mg with glimepiride: grey line with hollow circles; glimepiride grey lines with filled squares; rosiglitazone and glimepiride: grey line with hollow triangles. The incidence of serious AEs ranged between 3 and 5%: placebo (3%), rosiglitazone (3%), liraglutide 0.6 mg (3%), liraglutide 1.2 mg (4%) and liraglutide 1.8 mg (5%). Most treatment-emergent serious AEs were judged by investigators to be unlikely to be related to trial products. No deaths were reported during the trial. One subject developed chronic pancreatitis whilst taking liraglutide 0.6 mg; the person had no reported previous history of pancreatitis. The subject continued on liraglutide therapy and completed the trial. At screening, five patients had been previously diagnosed with pancreatitis. As pancreatitis was not an exclusion criterion, these patients were randomized as follows: one to liraglutide 0.6 mg, one to liraglutide 1.2 mg, two to liraglutide 1.8 mg and one to rosiglitazone + glimepiride. All five patients completed the trial without reporting pancreatitis as an adverse event. Hypoglycaemia was infrequent with all treatments. One major hypoglycaemic episode (self-measured blood glucose = 3.0 mmol/l) occurred 9 days after treatment started in a subject receiving liraglutide 1.8 mg in combination with glimepiride. Although medical assistance was not needed, the subject required third-party assistance. The investigator judged the episode as likely to be related to glimepiride and reduced the dose from 4 to 3 mg after the incident. Minor hypoglycaemia occurred in < 10% of subjects for any treatment. The proportion of subjects experiencing minor hypoglycaemia during the trial was lowest with placebo (i.e. glimepiride monotherapy 2.6%; 0.17 events/subject-year), comparable with liraglutide 0.6 mg (5.2%, 0.17 events/subject-year) and rosiglitazone (4.3%, 0.12 events/subject-year) groups and similar between the liraglutide 1.2 mg (9.2%, 0.51 events/subject-year) and liraglutide 1.8 mg (8.1%, 0.47 events/subject-year) treatment groups. Incidence was higher with liraglutide 1.2 mg (P = 0.0024) and 1.8 mg (P = 0.0065) compared with rosiglitazone and liraglutide 1.2 mg compared with placebo (P = 0.048), occurring in the setting of lower mean HbA1c values. Antibodies to liraglutide were found in 9–13% of subjects treated with liraglutide. No significant effects of these antibodies on HbA1c were found in pooled analyses of four trials including the current study. There were no clinically relevant changes in ophthalmoscopy, biochemistry, urinalysis, haematology or ECG assessments. No significant differences in calcitonin were found between the three groups treated with liraglutide when compared with either placebo or rosiglitazone at the end of the trial at week 26.\n\nBODY.DISCUSSION:\nTreatment with liraglutide plus glimepiride was superior to glimepiride monotherapy at all doses of liraglutide and superior to rosiglitazone plus glimepiride for the two higher liraglutide doses for improving HbA1c. 
Similar findings for reductions in FPG and PPG highlight improved 24-h glucose control with once-daily liraglutide, with substantially more subjects reaching glycaemic targets, particularly with liraglutide 1.8 mg. Improvements in pancreatic B-cell function were larger with liraglutide 1.2 and 1.8 mg compared with rosiglitazone. Liraglutide was well tolerated and occurrence of gastrointestinal AEs was low overall, particularly after week 4. Although rates of hypoglycaemia were low in all treatment groups (< 10%), minor hypoglycaemic events occurred more often in patients treated with glimepiride plus liraglutide 1.2 or 1.8 mg than with glimepiride alone. It should be noted, however, that patients treated with liraglutide 1.2 or 1.8 mg achieved a lower HbA1c than those receiving glimepiride monotherapy. At lower HbA1c levels, sulphonylureas are known to elicit hypoglycaemia more readily than at higher levels. In clinical practice it may be possible to reduce the dose of sulphonylurea (when used with liraglutide) to minimize risk of hypoglycaemia and maintain HbA1cimprovements. Although weight effects were modest, liraglutide produced more favourable weight effects compared with rosiglitazone, which produced substantial weight gain. In other studies with liraglutide, subjects adding a 1.8-mg dose to metformin lost 2.8 kg [14], while those adding both metformin and glimepiride lost 1.8 kg compared with placebo [15] (both over 26 weeks) and those on liraglutide monotherapy (1.8 mg) lost 2.45 kg over 52 weeks [16]. In our study, because sulphonylureas usually cause weight gain, inclusion or optimization of glimepiride but not metformin may have mitigated the weight benefits typically associated with liraglutide. Lack of weight effects could be secondary to lower baseline body weight, withdrawal of previous metformin treatment or defensive snacking to minimize risk of hypoglycaemia. It might have been expected that the greater weight gain with rosiglitazone compared with liraglutide 1.8 mg would be associated with a concurrent increase in insulin resistance with rosiglitazone. The absence of this effect could reflect the insulin-sensitizing nature of rosiglitazone. Improvements in pancreatic B-cell function associated with liraglutide are consistent with other studies [7–9]. Study strengths include inclusion of both placebo and active (rosiglitazone) comparators and that OGLAs were optimized (not maximized) before randomization to minimize risk of hypoglycaemia. Limitations of the study include short duration of the trial and restriction on glimepiride and rosiglitazone in some countries that precluded maximal dosing. The impact of using other GLP-1-based treatments [such as exenatide, or the dipeptidyl peptidase-4 (DPP-4) inhibitor, sitagliptin] with sulphonylureas in subjects with T2D has been studied. In a 30-week American trial where exenatide twice a day was added to sulphonylureas, HbA1c was reduced by 0.46% from baseline with 5 μg and 0.86% with 10 μg [17] compared with 1.1% with liraglutide 1.8 or 1.2 mg. This reduction in HbA1c with liraglutide is consistent with other LEAD trials investigating liraglutide as monotherapy or in combination with various OGLA drugs. In these trials, HbA1c was reduced by 1–1.5%[14,16,18–20]. Reductions in FPG with exenatide were 0.3 and 0.6 mmol/l from baseline with 5 μg and 10 μg, respectively, compared with 1.4 mmol/l with liraglutide 1.8 mg; weight loss of 1.6 kg occurred with exenatide 10 μg compared with 0.2 kg for liraglutide 1.8 mg [17]. 
Differences in weight effects may be as a result of lower baseline weight in this trial (82 kg) compared with exenatide (96 kg) and discontinuation of previous metformin therapy, unlike the exenatide trial where exenatide was added to previous sulphonylurea monotherapy [17]. Other large-scale trials with liraglutide in combination with sulphonylureas have demonstrated weight loss of 2–3 kg [18,20]. Withdrawals from exenatide trials ranged from 24–30% compared with 9–14% with liraglutide in this study. Nausea with exenatide ranged from 39% with 5 μg to 51% with 10 μg [17] compared with 10.5% for liraglutide. Furthermore, 41% were positive for anti-exenatide antibodies compared with 9–13% with anti-liraglutide antibodies. With sitagliptin 100 mg once daily for 24 weeks, HbA1c decreased by 0.3% from baseline in subjects receiving glimepiride, with 11% achieving an HbA1c < 7.0%[21]. Reductions in FPG and PPG from baseline were 0.05 and 1.4 mmol/l, respectively, while weight increased by 0.8 kg and the prevalence of nausea was < 1%. Although head-to-head trials are required to test true differences between these agents, the marked effects of liraglutide on FPG may be as a result of consistent blood levels of liraglutide maintained over 24 h compared with exenatide which has to be administered 60 min before breakfast and dinner and has a half-life of 1.5–3.6 h [22]. In a recent 26-week head-to-head trial comparing liraglutide with exenatide, liraglutide produced a 0.3% greater decrease on HbA1c (P < 0.0001) [20]. Because DPP-4 inhibitors inhibit the degradation of GLP-1, the efficacy of sitagliptin is dependent on levels of endogenous GLP-1 which is physiologically low compared with the much higher pharmacological levels of liraglutide. Pharmacological levels may be needed to induce satiety, weight loss and possibly larger HbA1c reductions. Liraglutide is an effective and well-tolerated once-daily human GLP-1 analogue that improves overall glycaemic control and indices of pancreatic B-cell function with minimal weight gain and risk of hypoglycaemia when used in combination with a sulphonylurea for T2D.\n\nBODY.COMPETING INTERESTS:\nThe study was funded by Novo Nordisk, the manufacturer of liraglutide. In collaboration with the investigators, Novo Nordisk was responsible for the study design, protocol, statistical analysis plans, oversight, analysis and reporting of the results. Data were recorded at the clinical centres and maintained by the sponsor. The LEAD-1 SU study group had full access to the data. Final responsibility for the decision to submit the manuscript for publication was the authors. MM has received lecture fees from Novo Nordisk, Servier, MSD; JS has received honoraria, grants and lecture fees from Novo Nordisk; MB, WMWB and NAK have no conflicts to declare; JS has received lecture fees from Novo Nordisk; MZ is employed by, and holds stock in, Novo Nordisk; TLT is employed by Novo Nordisk; SC is a member of the international advisory board on liraglutide for Novo Nordisk and has received lecture fees from Novo Nordisk.\n\n**Question:** Compared to Placebo plus glimepiride what was the result of Rosiglitazone plus glimepiride on HbA1c level at 26 weeks?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly decreased\n"
}
] |
significantly decreased
|
146
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Liraglutide, a once-daily human GLP-1 analogue, added to a sulphonylurea over 26 weeks produces greater improvements in glycaemic and weight control compared with adding rosiglitazone or placebo in subjects with Type 2 diabetes (LEAD-1 SU)\n\n ABSTRACT.AIM:\nTo compare the effects of combining liraglutide (0.6, 1.2 or 1.8 mg/day) or rosiglitazone 4 mg/day (all n ≥ 228) or placebo (n = 114) with glimepiride (2–4 mg/day) on glycaemic control, body weight and safety in Type 2 diabetes.\n\nABSTRACT.METHODS:\nIn total, 1041 adults (mean ± sd), age 56 ± 10 years, weight 82 ± 17 kg and glycated haemoglobin (HbA1c) 8.4 ± 1.0% at 116 sites in 21 countries were stratified based on previous oral glucose-lowering mono : combination therapies (30 : 70%) to participate in a five-arm, 26-week, double-dummy, randomized study.\n\nABSTRACT.RESULTS:\nLiraglutide (1.2 or 1.8 mg) produced greater reductions in HbA1c from baseline, (−1.1%, baseline 8.5%) compared with placebo (+0.2%, P < 0.0001, baseline 8.4%) or rosiglitazone (−0.4%, P < 0.0001, baseline 8.4%) when added to glimepiride. Liraglutide 0.6 mg was less effective (−0.6%, baseline 8.4%). Fasting plasma glucose decreased by week 2, with a 1.6 mmol/l decrease from baseline at week 26 with liraglutide 1.2 mg (baseline 9.8 mmol/l) or 1.8 mg (baseline 9.7 mmol/l) compared with a 0.9 mmol/l increase (placebo, P < 0.0001, baseline 9.5 mmol/l) or 1.0 mmol/l decrease (rosiglitazone, P < 0.006, baseline 9.9 mmol/l). Decreases in postprandial plasma glucose from baseline were greater with liraglutide 1.2 or 1.8 mg [−2.5 to −2.7 mmol/l (baseline 12.9 mmol/l for both)] compared with placebo (−0.4 mmol/l, P < 0.0001, baseline 12.7 mmol/l) or rosiglitazone (−1.8 mmol/l, P < 0.05, baseline 13.0 mmol/l). Changes in body weight with liraglutide 1.8 mg (−0.2 kg, baseline 83.0 kg), 1.2 mg (+0.3 kg, baseline 80.0 kg) or placebo (−0.1 kg, baseline 81.9 kg) were less than with rosiglitazone (+2.1 kg, P < 0.0001, baseline 80.6 kg). Main adverse events for all treatments were minor hypoglycaemia (< 10%), nausea (< 11%), vomiting (< 5%) and diarrhoea (< 8%).\n\nABSTRACT.CONCLUSIONS:\nLiraglutide added to glimepiride was well tolerated and provided improved glycaemic control and favourable weight profile.\n\nBODY.INTRODUCTION:\nMost drugs that target Type 2 diabetes (T2D) also cause weight gain or hypoglycaemia, or both, with the risk increasing with combination therapy. Glucagon-like peptide-1 (GLP-1)-based therapies stimulate insulin secretion and reduce glucagon secretion only during hyperglycaemia. GLP-1 also slows gastric emptying and reduces appetite [1]. Although American Diabetes Association (ADA)/European Association for the Study of Diabetes (EASD) guidelines recommend lifestyle and metformin as initial therapy for T2D [2], sulphonylureas are used widely, particularly when metformin or thiazolidinediones are not tolerated. 
Glycaemic control eventually deteriorates with sulphonylureas while hypoglycaemia and weight gain are common [3]. Incretin therapy improves glycaemic control with low hypoglycaemic risk, while delayed gastric emptying and reduced appetite can reduce weight [1,4]. Liraglutide is a once-daily human GLP-1 analogue with 97% linear amino-acid sequence homology to human GLP-1 [5] and half-life of 13 h after subcutaneous administration that produces 24-h blood glucose control [6]. Liraglutide monotherapy for 14 weeks reduced glycated haemoglobin (HbA1c) by 1.7% and fasting plasma glucose (FPG) by 3.4 mmol/l without causing hypoglycaemia, along with weight loss (∼3 kg) compared with placebo [7]. Improvements in pancreatic B-cell function [7–9] and blood pressure [7], along with decreased glucagon secretion [7,10], also occurred. As part of the phase 3 programme [the Liraglutide Effect and Action in Diabetes (LEAD) programme] with liraglutide in > 4000 subjects with T2D as monotherapy or in combination therapy, this 26-week trial examined liraglutide plus glimepiride compared with either placebo or rosiglitazone added to glimepiride on glycaemic control and body weight.\n\nBODY.SUBJECTS AND METHODS.STUDY PARTICIPANTS:\nInclusion criteria: T2D treated with oral glucose-lowering agents (OGLAs) for ≥ 3 months; 18–80 years of age; HbA1c 7.0–11.0% (previous OGLA monotherapy) or 7.0–10.0% (previous OGLA combination therapy); body mass index (BMI) ≤ 45.0 kg/m2. Exclusion criteria: used insulin within 3 months, impaired liver or renal function, uncontrolled hypertension (≥ 180/100 mmHg), cancer or used any drugs apart from OGLAs likely to affect glucose concentrations. Subjects provided written informed consent. The study was conducted in accordance with good clinical practice guidelines and approved by independent ethics committees.\n\nBODY.SUBJECTS AND METHODS.STUDY DESIGN:\nThe study was a 26-week, double-blind, double-dummy, randomized, active-control, five-armed parallel (116 sites in 21 countries, primarily Europe and Asia) trial enrolling 1041 subjects (1–37 subjects per centre), all receiving glimepiride (2–4 mg/day) in combination with (Fig. 1): FIGURE 1Overview of trial design and treatment arms. one of three liraglutide doses [0.6, 1.2 or 1.8 mg, injected subcutaneously (Novo Nordisk, Bagsvaerd, Denmark) and rosiglitazone placebo];liraglutide placebo and rosiglitazone placebo;liraglutide placebo and rosiglitazone 4 mg/day (rosiglitazone; AvandiaTM; GlaxoSmithKline, London, UK). The doses of rosiglitazone and glimepiride used were determined by the highest doses approved in all participating counties. After discontinuing previous OGLAs except glimepiride, separate 2-week titration and maintenance periods with glimepiride (open-label) preceded randomization (Fig. 1). Subjects were stratified according to previous treatment (monotherapy or combination therapy). After randomization, 2-week treatment titration and 24-week treatment (maintenance) phases (Fig. 1) were completed. Liraglutide was up-titrated weekly in 0.6-mg increments until allocated doses were reached. Glimepiride could be adjusted between 2 and 4 mg/day in case of hypoglycaemia or other adverse events (AEs), while other drug doses were fixed. Liraglutide (active and placebo) was supplied in 3-ml pre-filled pens with 31G needles (Novo Nordisk). Subjects were encouraged to inject liraglutide into the upper arm, thigh or abdomen at the same time each day. 
Rosiglitazone and glimepiride were taken in the morning or with the first meal.\n\nBODY.SUBJECTS AND METHODS.STUDY MEASUREMENTS.EFFICACY:\nThe primary endpoint was change from baseline HbA1c after 26 weeks of treatment. Secondary endpoints included: percentages of subjects reaching HbA1c (< 7.0%, ≤ 6.5%), FPG (5.0 to ≤ 7.2 mmol/l) and postprandial plasma glucose (PPG; 10.0 mmol/l) targets [11–13]; changes in body weight, FPG, mean PPG, indices of pancreatic B-cell function [pro-insulin : insulin ratio and homeostasis model assessment (HOMA)-B], HOMA-insulin resistance (HOMA-IR) and blood pressure (BP). HbA1c was measured centrally (MDS Pharma Services, King of Prussia, PA, USA) by high performance liquid chromatography while plasma glucose (PG) was self-measured using MediSense® glucose meters (Abbott Diagnostics Inc., Abbott Park, IL, USA). Insulin and C-peptide were measured by chemiluminescence, proinsulin by ELISA, while glucagon was measured in aprotinin-treated plasma by radioimmunoassay. The proinsulin : insulin ratio was calculated from fasting insulin and fasting proinsulin. HOMA-B and HOMA-IR were both calculated from FPG and fasting insulin. Samples measured centrally were collected and transported according to detailed procedures in the MDS Pharma Services manual. Samples stored at ambient temperature were shipped by courier to the central laboratory on the same day as collection, while frozen samples were shipped every 3 weeks.\n\nBODY.SUBJECTS AND METHODS.STUDY MEASUREMENTS.SAFETY:\nSafety variables included hypoglycaemic episodes based on PG levels (< 3.1 mmol/l), liraglutide antibodies including cross-reacting and neutralizing antibodies, tolerability (gastrointestinal complaints) and pulse. AEs, vital signs, electrocardiogram (ECG), biochemical and haematology measures including calcitonin were also monitored. Self-treated hypoglycaemic episodes were classified as minor, while those requiring third-party assistance were considered major. Serum antibodies against liraglutide were measured by radioimmunoprecipitation assay.\n\nBODY.SUBJECTS AND METHODS.STATISTICAL ANALYSES:\nAll efficacy and safety analyses were based on intent-to-treat criteria, defined as subjects who were exposed to ≥ 1 dose of trial product(s). Efficacy endpoints were analysed by ancova with treatment, country and previous glucose-lowering treatment as fixed effects and baseline values as covariates. Missing data were imputed by last observation carried forward (LOCF). Sample size calculations were based on predicted HbA1c and body weight after trial completion. As the three liraglutide + glimepiride groups were to be compared with both rosiglitazone + glimepiride and glimepiride monotherapy, two calculations were performed. These sample size calculations assumed a standard deviation of 1.2% of HbA1c, the non-inferiority/superiority margin vs. active control was set to 0.4% and the difference to detect (superiority vs. placebo) was set to 0.5%. For body weight, a coefficient of variation of 3% (based on phase 2a trials for liraglutide) and a difference to detect of 3% were assumed. A combined power (calculated as the product of the marginal powers for HbA1c and body weight) of at least 85% was required. These calculations indicated that at least 168 and 81 patients completing the study would be needed for the combination and glimepiride monotherapy groups, respectively. 
Assuming a drop-out rate of 25%, targets for randomization were 228 in each of the combination therapy groups and 114 in the placebo group (total n = 1026). To protect against Type 1 errors, HbA1c was analysed using hierarchical testing for descending doses of liraglutide. First, superiority of liraglutide 1.8 mg to placebo was tested and, only if superior to placebo, non-inferiority to rosiglitazone was tested. If non-inferiority was obtained, superiority to rosiglitazone for liraglutide 1.8 mg was tested and superiority to placebo for liraglutide 1.2 mg was tested. If superiority was confirmed, non-inferiority to rosiglitazone would be tested and so on (i.e. testing sequence was stopped when hypotheses could not be rejected). Superiority was concluded when upper limits of two-sided 95% confidence intervals (CIs) for treatment differences were below 0%; non-inferiority was concluded if these values were < 0.4%; for secondary endpoints, Type 1 errors were controlled by estimating simultaneous CIs using Dunnett's method. Proportions of subjects achieving HbA1c (HbA1c < 7.0%, and ≤ 6.5%) and FPG (5.0 ≤ FPG ≤ 7.2 mmol/l) targets [13] were compared between treatments using logistic regression with allocated treatment and baseline values as covariates. Chi-square analyses assessed differences in treatments for percentages of subjects achieving no, one, two or three PPG values < 10 mmol/l [13]. Hypoglycaemic episodes were analysed under the assumption that number per subject were negatively binomially distributed using a generalized linear model, including treatment and country as fixed effects. Other safety data were compared by descriptive statistics. Values for descriptive statistics are expressed as means ± sd, while ancova results are expressed as least square means ± SEM or with 95% CI unless otherwise noted. Significance levels were set to 5% for two-sided tests and 2.5% for one-sided tests.\n\nBODY.RESULTS.DISPOSITION AND DEMOGRAPHICS:\nThe treatment groups were well balanced (Table 1). Of 1712 subjects screened, 1041 were randomized and 1040 were exposed to trial drugs; 147 subjects (14.1%) withdrew (Fig. 2). Withdrawals were higher with placebo (27%) and rosiglitazone treatment (16%) compared with liraglutide 0.6 mg (11%), liraglutide 1.2 mg (14%) and liraglutide 1.8 mg (9%) treatment. Thirty-eight subjects (3.7%) withdrew as a result of AEs (Fig. 2). Table 1 Demographic characteristics of study participants Liraglutide 0.6 mg ( n = 233) Liraglutide 1.2 mg ( n = 228) Liraglutide 1.8 mg ( n = 234) Placebo ( n = 114) Rosiglitazone ( n = 232) Male : female (%) 54 : 46 45 : 55 53 : 47 47 : 53 47 : 53 Age (years) 55.7 ± 9.9 57.7 ± 9.0 55.6 ± 10.0 54.7 ± 10.0 56.0 ± 9.8 Duration of diabetes (years) 6.5 (4.0,10.2) 6.7 (4.0,10.7) 6.5 (3.7,10.5) 6.5 (4.5,10.6) 6.6 (4.3,10.7) Previous on mono : combi (%) 30 : 70 31 : 69 27 : 73 32 : 68 32 : 68 FPG (mmol/l) 10.0 ± 2.4 9.8 ± 2.7 9.7 ± 2.4 9.5 ± 2.0 9.9 ± 2.5 HbA 1c (%) 8.4 ± 1.0 8.5 ± 1.1 8.5 ± 0.9 8.4 ± 1.0 8.4 ± 1.0 Diabetic retinopathy (%) 17.2 14.9 12.0 13.2 16.4 Hypertension (%) 69.1 68.0 69.7 64.9 66.8 BMI (kg/m 2 ) 30.0 ± 5.0 29.8 ± 5.1 30.0 ± 5.1 30.3 ± 5.4 29.4 ± 4.8 Weight (kg) 82.6 ± 17.7 80.0 ± 17.1 83.0 ± 18.1 81.9 ± 17.1 80.6 ± 17.0 Systolic blood pressure (mmHg) 131 ± 16 133 ± 15 132 ± 16 131 ± 15.3 133 ± 15 Data are mean ± sd and percentages, except for duration of diabetes, where data are median, 25th and 75th percentile. 
BMI, body mass index; FPG, fasting plasma glucose; HbA 1c , glycated haemoglobin; mono : combi, previous treatment with either monotherapy or combination therapy; sd , standard deviation. FIGURE 2Flow of patients through the study.\n\nBODY.RESULTS.EFFICACY.HBA:\nHbA1c decreased rapidly with all doses of liraglutide when added to glimepiride compared with either rosiglitazone or placebo (i.e. glimepiride monotherapy), irrespective of previous therapy. The greatest decreases occurred with liraglutide 1.2 and 1.8 mg (Fig. 3a–c). After 26 weeks, HbA1c decreased by 1.1% from baseline (primary endpoint) with either liraglutide 1.2 or 1.8 mg, respectively, compared with either placebo (+0.2%) or rosiglitazone (−0.4%) (Fig. 3d). Estimated treatment differences and 95% CIs to placebo were: liraglutide 1.8 mg: −1.4% (1.6, −1.1); liraglutide 1.2 mg: −1.3% (1.5, −1.1); liraglutide 0.6 mg: −0.8% (−1.1, −0.6); rosiglitazone: −0.7% (−0.9, −0.4). All liraglutide doses were superior to placebo (P < 0.0001), while the two higher liraglutide doses were superior to rosiglitazone (P < 0.0001). Liraglutide 0.6 mg was non-inferior to rosiglitazone. Rosiglitazone also was superior to placebo (P < 0.0001). FIGURE 3Mean glycated haemoglobin (HbA1c) by treatment and week (intent-to-treat population with last observation carried forward): (a) overall population; (b) previously on monotherapy; or (c) previously on combination therapy; (d) mean changes in HbA1c from baseline after 26 weeks of treatment. Keys: (a–c) liraglutide 0.6 mg: grey dotted line with squares; liraglutide 1.2 mg: black solid line with triangles; liraglutide 1.8 mg: black dotted line with squares; rosiglitazone: grey solid line with circles; placebo: black solid line with circles. (d) liraglutide 0.6 mg: black stripes on white; liraglutide 1.2 mg: white stripes on black, liraglutide 1.8 mg: grey tint; rosiglitazone: white; placebo: black. ****P < 0.0001 compared with placebo; ††††P < 0.0001 compared with rosiglitazone. HbA1c decreases were greater for subjects who entered from monotherapy compared with combination therapy (Fig. 3d). However, because the increase with placebo was higher for individuals entering on combination therapy (0.7 vs. 0.23%), the differences between treatment groups in favour of liraglutide were similar irrespective of whether subjects were treated previously with monotherapy or combination therapy. Neither age, gender nor BMI affected these trends.\n\nBODY.RESULTS.EFFICACY.PERCENTAGE REACHING AN HBA:\nThe percentage of subjects reaching ADA [2] and International Diabetes Federation (IDF)/American Association of Clinical Endocrinologists (AACE) [11,12] treatment HbA1c goals with liraglutide was dose dependent (Fig. 4). At week 26, 42% and 21% of subjects treated with liraglutide 1.8 mg reached an HbA1c < 7.0% and ≤ 6.5%, respectively, compared with 8% and 4% for placebo (Fig. 4). The estimated proportion of subjects treated with either liraglutide 1.2 or 1.8 mg reaching ADA/EASD and IDF/AACE HbA1c targets was substantially greater compared with either placebo (P < 0.0001) or rosiglitazone (Fig. 4; P ≤ 0.0003), with more patients reaching < 7.0% with liraglutide 1.8 mg compared with 1.2 mg (P = 0.018). 
FIGURE 4Subjects achieving specified glycated haemoglobin (HbA1c) levels: (a) percentage reaching HbA1c < 7.0% (American Diabetes Association/European Association for the Study of Diabetes target); (b) percentage reaching HbA1c < 6.5% (International Diabetes Federation/American Association of Clinical Endocrinologists targets); (c) cumulative distribution of HbA1c at 26 weeks for the intent-to-treat (ITT) population; and (d) for the ITT last observation carried forward (LOCF) population. Keys: (a, b) liraglutide 0.6 mg: black stripes on white; liraglutide 1.2 mg: white stripes on black, liraglutide 1.8 mg: grey tint; rosiglitazone: white; placebo: black. (c, d) liraglutide 0.6 mg: pale grey solid line; liraglutide 1.2 mg: grey solid line, liraglutide 1.8 mg: black solid line; rosiglitazone: dotted black line; placebo: dotted grey line; baseline visit: long dashed black line. ****P < 0.0001 or **P < 0.01 compared with placebo; ††††P < 0.0001 or †††P = 0.0005 compared with rosiglitazone.\n\nBODY.RESULTS.EFFICACY.FASTING PLASMA GLUCOSE:\nBy week 2, subjects treated with liraglutide had rapid and larger decreases in FPG vs. comparator treatment. At week 26, all doses of liraglutide decreased FPG more than did placebo (Fig. 5; P < 0.0001), while only liraglutide 1.2 or 1.8 mg produced greater reductions than rosiglitazone. FPG treatment differences to placebo were 1.7 mmol/l for liraglutide 0.6 mg and 2.6 mmol/l for both liraglutide 1.2 and 1.8 mg. An 0.7-mmol/l greater reduction in FPG was achieved with either liraglutide 1.2 or 1.8 mg compared with rosiglitazone (P ≤ 0.006) after 26 weeks. FIGURE 5Mean changes from baseline in fasting plasma glucose after 26 weeks of treatment. ****P < 0.0001 compared with placebo; ††P < 0.01 compared with rosiglitazone. The percentage of subjects achieving FPG values between 5.0 mmol/l and ≤ 7.2 mmol/l (ADA target) after 26 weeks was higher with liraglutide: 0.6 mg (19%; P = 0.002); 1.2 mg (37%; P < 0.001); and 1.8 mg (38%;P < 0.001) compared with placebo (7%). The liraglutide 1.2 and 1.8 mg treatment groups also had more subjects achieving the same FPG target at end of treatment compared with rosiglitazone (26%) (P = 0.007 and P = 0.01, respectively).\n\nBODY.RESULTS.EFFICACY.POSTPRANDIAL PLASMA GLUCOSE:\nPPG was reduced similarly after each meal. The greatest reductions in mean PPG values from baseline (average of values obtained 90 min after breakfast, lunch and evening meal) occurred with liraglutide 1.2 mg (2.5 mmol/l) and liraglutide 1.8 mg (2.7 mmol/l). By comparison, the reduction from baseline in mean PPG values was 1.8 mmol/l for rosiglitazone and liraglutide 0.6 mg and 0.4 mmol/l for placebo. Treatment differences for PPG were greater with all doses of liraglutide compared with placebo (1.5–2.4 mmol/l; P < 0.0001) and greater with liraglutide 1.2 mg (0.64 mmol/l; P = 0.043) and 1.8 mg (0.87 mmol/l;P = 0.0022) compared with rosiglitazone.\n\nBODY.RESULTS.EFFICACY.PPG MEASUREMENTS < 10.0 MMOL/L:\nThe percentage of subjects with one, two or three PPG measurements < 10.0 mmol/l (ADA target) were greater for all doses of liraglutide compared with placebo (P < 0.05) but not rosiglitazone.\n\nBODY.RESULTS.BODY WEIGHT:\nMean weight at baseline was 81.6 kg. Mean reductions in weight from baseline to end of treatment were 0.2 kg with liraglutide 1.8 mg and 0.1 kg with placebo treatment, while increases occurred with either liraglutide 0.6 mg (0.7 kg), liraglutide 1.2 mg (0.3 kg) or rosiglitazone (2.1 kg) (Fig. 6). 
Unlike rosiglitazone, weight did not increase substantially with liraglutide and the differences between rosiglitazone and liraglutide were statistically significant (−2.3 to −1.4 kg; P < 0.0001), although there were no significant differences compared with placebo. Gender appeared to have no influence on the results, as indicated when added as a fixed effect in the ancova model. FIGURE 6Mean changes in body weight from baseline after 26 weeks of treatment. *P < 0.05 compared with placebo; ††††P < 0.0001 compared with rosiglitazone.\n\nBODY.RESULTS.INDICES OF PANCREATIC B-CELL FUNCTION AND INSULIN RESISTANCE:\nReductions in the proinsulin : insulin ratio were greater with both liraglutide 1.2 and 1.8 mg compared with either rosiglitazone or placebo (Table 2; P ≤ 0.02). HOMA-B increased with liraglutide (1.8 or 1.2 mg) compared with rosiglitazone (P < 0.05), while this increase was only different to placebo with liraglutide 1.2 mg (P = 0.01) and not liraglutide 1.8 mg (P = 0.051). There were no significant differences between treatments for HOMA-IR. Table 2 Selected indices of pancreatic B-cell function Variable Treatment Baseline Week 26 (LOCF) Least square difference from placebo (95% CI) Least square difference from rosiglitazone (95% CI) Proinsulin : insulin ratio Liraglutide 0.6 mg 0.42 ± 0.22 0.38 ± 0.24 −0.05 (−0.11; 0.00) −0.02 (−0.06; 0.03) Liraglutide 1.2 mg 0.45 ± 0.31 0.33 ± 0.20 −0.10 (−0.16; −0.05) † −0.07 (−0.11; −0.02) * Liraglutide 1.8 mg 0.48 ± 0.33 0.36 ± 0.20 −0.09 (−0.15; −0.03) * −0.05 (−0.10; −0.01) * Placebo 0.44 ± 0.27 0.46 ± 0.29 Rosiglitazone 0.45 ± 0.29 0.40 ± 0.20 HOMA-B (%) Liraglutide 0.6 mg 51 ± 43.3 70 ± 88.6 15 (−19.10; 49.0) 11 (−16.7; 39.0) Liraglutide 1.2 mg 71 ± 254.3 99 ± 184.3 43 (8.10; 76.9) * 39 (10.3; 67.0) * Liraglutide 1.8 mg 56 ± 84.6 91 ± 108.2 34 (−0.23; 68.5) 30 (2.00; 58.6) * Placebo 56 ± 103.3 52 ± 107.3 Rosiglitazone 46 ± 36.2 59 ± 63.3 * P ≤ 0.05; † P < 0.0001. CI, confidence interval; HOMA, homeostatis model assessment; LOCF, last observation carried forward. \n\nBODY.RESULTS.BLOOD PRESSURE AND PULSE:\nAlthough decreases in systolic blood pressure occurred with either liraglutide 1.2 or 1.8 mg (2.6–2.8 mmHg), they were not significantly different from placebo or rosiglitazone (0.9–2.3 mmHg). Reductions in diastolic blood pressure also occurred with all treatments (0.7–1.4 mmHg), with no significant differences between treatments. Pulse increases above baseline ranged from 2 to 4 beats/min with the three doses of liraglutide and 1 beat/min with rosiglitazone, while pulse decreased by 1 beat/min with placebo. Changes in pulse for all doses of liraglutide were significant vs. placebo (P ≤ 0.002). This also was true with either liraglutide 1.8 or 1.2 mg compared with rosiglitazone (P < 0.01).\n\nBODY.RESULTS.SAFETY:\nThe most common treatment-emergent AEs that were considered by investigators to be either possibly or probably related to liraglutide were gastrointestinal (diarrhoea, nausea, dyspepsia and constipation) and nervous system disorders (headache and dizziness), particularly during the first 4 weeks. Nausea was highest with liraglutide 1.2 mg (10.5%) and lowest with placebo (1.8%). Vomiting (4.4%) and diarrhoea (7.9%) were also higher with liraglutide 1.2 mg. Withdrawals because of nausea ranged from 0.9–2.2%, vomiting 0.4–0.9% and diarrhoea 0–1.3%. Nausea was more common with liraglutide compared with placebo and rosiglitazone, particularly during the first 4 weeks (Fig. 7). 
Frequency of nausea was less in the liraglutide 0.6 mg treatment group compared with the higher doses of liraglutide. Generally, the occurrence of nausea dissipated from 4 to 26 weeks of treatment in all groups using liraglutide (Fig. 7). FIGURE 7Percentage of subjects experiencing nausea over the course of the study. Key: liraglutide 0.6 mg with glimepiride: black line with filled circles; liraglutide 1.2 mg with glimepiride: black line with filled triangles; liraglutide 1.8 mg with glimepiride: grey line with hollow circles; glimepiride grey lines with filled squares; rosiglitazone and glimepiride: grey line with hollow triangles. The incidence of serious AEs ranged between 3 and 5%: placebo (3%), rosiglitazone (3%), liraglutide 0.6 mg (3%), liraglutide 1.2 mg (4%) and liraglutide 1.8 mg (5%). Most treatment-emergent serious AEs were judged by investigators to be unlikely to be related to trial products. No deaths were reported during the trial. One subject developed chronic pancreatitis whilst taking liraglutide 0.6 mg; the person had no reported previous history of pancreatitis. The subject continued on liraglutide therapy and completed the trial. At screening, five patients had been previously diagnosed with pancreatitis. As pancreatitis was not an exclusion criterion, these patients were randomized as follows: one to liraglutide 0.6 mg, one to liraglutide 1.2 mg, two to liraglutide 1.8 mg and one to rosiglitazone + glimepiride. All five patients completed the trial without reporting pancreatitis as an adverse event. Hypoglycaemia was infrequent with all treatments. One major hypoglycaemic episode (self-measured blood glucose = 3.0 mmol/l) occurred 9 days after treatment started in a subject receiving liraglutide 1.8 mg in combination with glimepiride. Although medical assistance was not needed, the subject required third-party assistance. The investigator judged the episode as likely to be related to glimepiride and reduced the dose from 4 to 3 mg after the incident. Minor hypoglycaemia occurred in < 10% of subjects for any treatment. The proportion of subjects experiencing minor hypoglycaemia during the trial was lowest with placebo (i.e. glimepiride monotherapy 2.6%; 0.17 events/subject-year), comparable with liraglutide 0.6 mg (5.2%, 0.17 events/subject-year) and rosiglitazone (4.3%, 0.12 events/subject-year) groups and similar between the liraglutide 1.2 mg (9.2%, 0.51 events/subject-year) and liraglutide 1.8 mg (8.1%, 0.47 events/subject-year) treatment groups. Incidence was higher with liraglutide 1.2 mg (P = 0.0024) and 1.8 mg (P = 0.0065) compared with rosiglitazone and liraglutide 1.2 mg compared with placebo (P = 0.048), occurring in the setting of lower mean HbA1c values. Antibodies to liraglutide were found in 9–13% of subjects treated with liraglutide. No significant effects of these antibodies on HbA1c were found in pooled analyses of four trials including the current study. There were no clinically relevant changes in ophthalmoscopy, biochemistry, urinalysis, haematology or ECG assessments. No significant differences in calcitonin were found between the three groups treated with liraglutide when compared with either placebo or rosiglitazone at the end of the trial at week 26.\n\nBODY.DISCUSSION:\nTreatment with liraglutide plus glimepiride was superior to glimepiride monotherapy at all doses of liraglutide and superior to rosiglitazone plus glimepiride for the two higher liraglutide doses for improving HbA1c. 
Similar findings for reductions in FPG and PPG highlight improved 24-h glucose control with once-daily liraglutide, with substantially more subjects reaching glycaemic targets, particularly with liraglutide 1.8 mg. Improvements in pancreatic B-cell function were larger with liraglutide 1.2 and 1.8 mg compared with rosiglitazone. Liraglutide was well tolerated and the occurrence of gastrointestinal AEs was low overall, particularly after week 4. Although rates of hypoglycaemia were low in all treatment groups (< 10%), minor hypoglycaemic events occurred more often in patients treated with glimepiride plus liraglutide 1.2 or 1.8 mg than with glimepiride alone. It should be noted, however, that patients treated with liraglutide 1.2 or 1.8 mg achieved a lower HbA1c than those receiving glimepiride monotherapy. At lower HbA1c levels, sulphonylureas are known to elicit hypoglycaemia more readily than at higher levels. In clinical practice it may be possible to reduce the dose of sulphonylurea (when used with liraglutide) to minimize risk of hypoglycaemia and maintain HbA1c improvements. Although weight effects were modest, liraglutide produced a more favourable weight profile than rosiglitazone, which produced substantial weight gain. In other studies with liraglutide, subjects adding a 1.8-mg dose to metformin lost 2.8 kg [14], while those adding both metformin and glimepiride lost 1.8 kg compared with placebo [15] (both over 26 weeks) and those on liraglutide monotherapy (1.8 mg) lost 2.45 kg over 52 weeks [16]. In our study, because sulphonylureas usually cause weight gain, inclusion or optimization of glimepiride but not metformin may have mitigated the weight benefits typically associated with liraglutide. Lack of weight effects could be secondary to lower baseline body weight, withdrawal of previous metformin treatment or defensive snacking to minimize risk of hypoglycaemia. It might have been expected that the greater weight gain with rosiglitazone compared with liraglutide 1.8 mg would be associated with a concurrent increase in insulin resistance with rosiglitazone. The absence of this effect could reflect the insulin-sensitizing nature of rosiglitazone. Improvements in pancreatic B-cell function associated with liraglutide are consistent with other studies [7–9]. Study strengths include the inclusion of both placebo and active (rosiglitazone) comparators and that OGLAs were optimized (not maximized) before randomization to minimize risk of hypoglycaemia. Limitations of the study include the short duration of the trial and restrictions on glimepiride and rosiglitazone in some countries that precluded maximal dosing. The impact of using other GLP-1-based treatments [such as exenatide, or the dipeptidyl peptidase-4 (DPP-4) inhibitor, sitagliptin] with sulphonylureas in subjects with T2D has been studied. In a 30-week American trial where exenatide twice a day was added to sulphonylureas, HbA1c was reduced by 0.46% from baseline with 5 μg and 0.86% with 10 μg [17] compared with 1.1% with liraglutide 1.8 or 1.2 mg. This reduction in HbA1c with liraglutide is consistent with other LEAD trials investigating liraglutide as monotherapy or in combination with various OGLA drugs. In these trials, HbA1c was reduced by 1–1.5% [14,16,18–20]. Reductions in FPG with exenatide were 0.3 and 0.6 mmol/l from baseline with 5 μg and 10 μg, respectively, compared with 1.4 mmol/l with liraglutide 1.8 mg; weight loss of 1.6 kg occurred with exenatide 10 μg compared with 0.2 kg for liraglutide 1.8 mg [17]. 
Differences in weight effects may be as a result of lower baseline weight in this trial (82 kg) compared with exenatide (96 kg) and discontinuation of previous metformin therapy, unlike the exenatide trial where exenatide was added to previous sulphonylurea monotherapy [17]. Other large-scale trials with liraglutide in combination with sulphonylureas have demonstrated weight loss of 2–3 kg [18,20]. Withdrawals from exenatide trials ranged from 24–30% compared with 9–14% with liraglutide in this study. Nausea with exenatide ranged from 39% with 5 μg to 51% with 10 μg [17] compared with 10.5% for liraglutide. Furthermore, 41% were positive for anti-exenatide antibodies compared with 9–13% with anti-liraglutide antibodies. With sitagliptin 100 mg once daily for 24 weeks, HbA1c decreased by 0.3% from baseline in subjects receiving glimepiride, with 11% achieving an HbA1c < 7.0%[21]. Reductions in FPG and PPG from baseline were 0.05 and 1.4 mmol/l, respectively, while weight increased by 0.8 kg and the prevalence of nausea was < 1%. Although head-to-head trials are required to test true differences between these agents, the marked effects of liraglutide on FPG may be as a result of consistent blood levels of liraglutide maintained over 24 h compared with exenatide which has to be administered 60 min before breakfast and dinner and has a half-life of 1.5–3.6 h [22]. In a recent 26-week head-to-head trial comparing liraglutide with exenatide, liraglutide produced a 0.3% greater decrease on HbA1c (P < 0.0001) [20]. Because DPP-4 inhibitors inhibit the degradation of GLP-1, the efficacy of sitagliptin is dependent on levels of endogenous GLP-1 which is physiologically low compared with the much higher pharmacological levels of liraglutide. Pharmacological levels may be needed to induce satiety, weight loss and possibly larger HbA1c reductions. Liraglutide is an effective and well-tolerated once-daily human GLP-1 analogue that improves overall glycaemic control and indices of pancreatic B-cell function with minimal weight gain and risk of hypoglycaemia when used in combination with a sulphonylurea for T2D.\n\nBODY.COMPETING INTERESTS:\nThe study was funded by Novo Nordisk, the manufacturer of liraglutide. In collaboration with the investigators, Novo Nordisk was responsible for the study design, protocol, statistical analysis plans, oversight, analysis and reporting of the results. Data were recorded at the clinical centres and maintained by the sponsor. The LEAD-1 SU study group had full access to the data. Final responsibility for the decision to submit the manuscript for publication was the authors. MM has received lecture fees from Novo Nordisk, Servier, MSD; JS has received honoraria, grants and lecture fees from Novo Nordisk; MB, WMWB and NAK have no conflicts to declare; JS has received lecture fees from Novo Nordisk; MZ is employed by, and holds stock in, Novo Nordisk; TLT is employed by Novo Nordisk; SC is a member of the international advisory board on liraglutide for Novo Nordisk and has received lecture fees from Novo Nordisk.\n\n**Question:** Compared to Rosiglitazone plus glimepiride what was the result of Liraglutide (all doses) plus glimepiride on Decreases in diastolic blood pressure?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** no significant difference\n"
}
] |
no significant difference
|
205
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Immunogenicity and Smoking Cessation Outcomes for a Novel Nicotine Immunotherapeutic\n\n ABSTRACT:\nNicVAX®, a nicotine vaccine (3'AmNic-rEPA), has been clinically evaluated to determine if higher antibody concentrations are associated with higher smoking abstinence rates and if doses and frequency of administration are associated with increased antibody response. This randomized, double-blinded, placebo-controlled multicenter clinical trial (N=301 smokers) tested 200 and 400 μg doses administered 4 or 5 times over 6 months compared to placebo. 3'AmNic-rEPA recipients with the highest serum anti-nicotine antibody response (top 30% by AUC) were significantly more likely to attain 8 weeks continuous abstinence from weeks 19 through 26 than the placebo recipients (24.6% vs. 12.0%, p=0.024, OR=2.69, 95% CI, 1.14–6.37). The 5 injection 400 μg dose regimen had the greatest antibody response and had significantly higher abstinence rates than placebo. This study demonstrates proof-of-concept that 3'AmNic-rEPA elicits antibodies to nicotine and is associated with higher continuous abstinence rates, justifying its further development as a treatment for nicotine dependence.\n\nBODY.INTRODUCTION:\nWorldwide, smoking prevalence is 1.2 billion and approximately 5 million people die each year of smoking caused illnesses [1]. The global rate of smoking and smoking related deaths is anticipated to increase over the next 20 years unless significant public health measures are instituted. These include effective cessation interventions such as pharmacological treatments, which improve cessation rates by 1.5 to 3 fold over placebo intervention [2, 3]. Approved pharmacotherapies (e.g., nicotine replacements, bupropion SR, varenicline) for smoking cessation act on the central nervous system, each with a different mechanism of action. Other novel medications are being developed including immunotherapeutics targeting nicotine. Nicotine conjugate vaccines stimulate the immune system to develop nicotine specific antibodies (Abs) using an immunogen comprised of nicotine covalently linked to a larger carrier protein. Conceptually, the mechanism of action is that anti-nicotine antibodies bind nicotine molecules and the resulting complex is too large to cross the blood-brain barrier. With increasing Ab levels more nicotine is captured and sequestered in the blood and prevented from entering the brain, leading to less reinforcing effects from nicotine. Animal studies have demonstrated that passive or active immunization results in approximately 30% to 90% less nicotine entering the brain compared to control rats [4–7] and attenuated locomotor [4, 5] and behavioral [8, 9] responses to nicotine. Furthermore, vaccination reduced nicotine elimination from the body in a study with rats [10, 11], which may also contribute to reduced smoking. 
Although human studies are limited, published data evaluating different nicotine vaccines support the general concept that nicotine vaccines can be effective for smoking cessation in some smokers [12, 13]. Unfortunately, these studies either had small sample sizes [12], did not use an intent-to-treat population of smokers [13] or did not perform statistical analysis of the data [14]. The primary aim of the present study was to establish the proof-of-concept that (i) anti-nicotine antibodies are useful as an aid to smoking cessation and (ii) higher serum anti-nicotine antibody concentrations are associated with higher abstinence rates in an intent-to-treat population of smokers. One of the challenges with immunotherapeutics, such as vaccines, is attainment of therapeutic levels of antibody in most people. Therefore, this study tested two different doses of 3'-aminomethylnicotine P. aeruginosa r-exoprotein A - NicVAX (3'AmNic-rEPA) to identify a dose and schedule for further development: 200 and 400 μg across two different schedules (4 or 5 injections) compared to placebo for immunogenicity, efficacy and safety.\n\nBODY.RESULTS:\nA total of 301 subjects were randomized. Figure 2 shows the disposition and number of subjects within each treatment group. No significant group differences were observed in the demographic or smoking history by treatment or antibody levels (see Table 1).\n\nBODY.RESULTS.COMPLIANCE:\nAll 301 subjects received injection 1, 96.7%, 84.1%, 72.4% and 61.2% received injections 2, 3, 4 and 5 (Schedule 2 subjects only), respectively. No significant differences were observed across treatment groups for subjects receiving injections 2 through 4 for Schedules 1 and 2. Mean in-study duration was 286 ± 121 days.\n\nBODY.RESULTS.PROOF-OF-CONCEPT.EFFECTS OF HIGH AB ON ABSTINENCE:\nHigh Ab responders to 3'AmNic-rEPA were defined as the top 30% of responders by AUC (0 to 26 weeks) and the low Ab group as the remaining bottom 70% of responders. 3'AmNic-rEPA recipients in the high Ab group were significantly more likely to attain 8 weeks of continuous abstinence from weeks 19 through 26 than placebo (24.6% vs. 12.0%, p=0.024, OR=2.69, 95% CI, 1.14–6.37). No significant differences were observed between the 3'AmNic-rEPA low Ab group vs. placebo (9.3% vs. 12.0%., p=0.46). As a secondary outcome, continuous abstinence rate (CAR) to 52 weeks were evaluated from weeks 19 to 52 and were significantly higher for the high Ab group vs. placebo (19.7% vs. 10.0%, p=0.044, OR=2.64, 95% CI, 1.03–6.79) with no significant difference between the low Ab group and placebo (7.1% vs. 10.0%, p=0.43). 7-day point prevalence abstinence results show that subjects with high Ab levels were significantly more likely to abstain compared to placebo at 26 weeks (36.1% vs. 16.0%, p=0.0024, OR=3.30, 95% CI, 1.53–7.13) and 52 weeks (31.1% vs. 12.0%, p=0.0021, OR=3.69, 95% CI, 0.42–2.14). No significant differences were observed in the point prevalence abstinence rates between the low Ab group and placebo at 26 and 52 weeks (12.9% vs. 16.0%, p=0.51 and 11.4% vs. 12.0%, p=0.89, respectively). As shown in Figure 3A, abstinence rates remained essentially the same following the target quit date (TQD) for the duration of the study. To further validate the proof-of-concept, the relationship between abstinence during the final eight weeks of the study and anti-nicotine Ab concentrations (AUC) is shown in Figure 3B for all subjects receiving the vaccine. 
Continuous abstinence rates from week 45–52 are displayed for each 10th percentile increase in AUC. The proportion of abstinent subjects increased with increasing AUC percentile, and the ordered ranking maintained.\n\nBODY.RESULTS.PROOF-OF-CONCEPT.EFFECTS OF HIGH AB ON TIME TO CONTINUOUS ABSTINENCE:\nAn exploratory analysis examined the rate and time to continuous abstinence through study end (Figure 3C). Most smokers quit soon after the TQD, with a clear divergence of the high Ab group from the placebo and low Ab group. Among the 18 high Ab continuous abstainers, 12 initiated abstinence prior to the primary endpoint while 6 initiated their abstinence after the start of the primary endpoint. Furthermore, 3 of the 15 subjects who abstained during the primary endpoint relapsed by the end of the study. Cox proportional hazards analysis demonstrated superiority of the high Ab group compared to placebo (p=0.0069, hazard ratio of 2.76).\n\nBODY.RESULTS.PROOF-OF-CONCEPT.EVALUATION OF LONG-TERM ABSTINENCE:\nAs most subjects achieved abstinence shortly after their TQD, additional analyses were undertaken to evaluate prolonged abstinence to 6 and 12 months allowing for a 2 week grace period after the TQD [15]. (Prolonged abstinence is defined as not a single puff during the period from 2 weeks after the TQD for 20 weeks and 44 weeks, respectively). Prolonged abstinence rates to 6 months were significantly higher in the high Ab group vs. placebo (19.7% vs. 6.0%, p=0.0060, OR=4.41, 95% CI, 1.53–12.71) with no significant differences between the placebo and low Ab groups (7.9% vs. 6.0%, p=0.60). Subjects with high Ab were also significantly more likely to be abstinent for 12 months compared to placebo (18.0% vs. 6.0%, p=0.014; OR of 3.84; 95% CI, 1.32–11.20). The low Ab group did not differ significantly from placebo (7.1% vs. 6.0%, p=0.67).\n\nBODY.RESULTS.PROOF-OF-CONCEPT.CIGARETTE SMOKING IN NON-QUITTERS:\nStatistically significant differences were observed in reduction of daily cigarette consumption and cotinine between non-abstainers (weeks 19–52) with high Ab levels and non-abstainers in the placebo group (p=0.0015 and 0.019, respectively; see Figures 4A and 4C. The difference in the median reduction in cigarette consumption, following the TQD, between the high Ab non-abstainers and placebo non-abstainers was on average 4.6 cigarettes per day. Similarly, cotinine geometric mean concentrations (GMCs) were 19.0% lower on average following the target quit date in the high Ab non-abstainers as compared to placebo non-abstainers. Median cigarettes per day and cotinine GMC for placebo and low Ab group are nearly identical over the study period. Differences in mean CO were not observed across all three groups (see Figure 4B). 15 out of the 301 subjects smoked more than a 2-fold higher number of cigarettes following the TQD as compared to baseline with no significant difference between the placebo (n=4/100) and 3'AmNic-rEPA (n=11/201) groups. The highest smoking levels observed post-TQD were 5-fold baseline in the placebo and 4-fold baseline in the 3'AmNic-rEPA groups. Elevated smoking levels for 11 of these 15 subjects returned to below 2-fold baseline by study end. Subjects with remaining elevated levels were similar between the 3'AmNic-rEPA group (n=3/201) and placebo (n=1/100). 
Individual subjects with CO levels elevated by 2-fold or higher than baseline were also assessed and results were similar with no significant difference between the placebo (n=5/100) and 3'AmNic-rEPA (n=13/201) groups.\n\nBODY.RESULTS.PROOF-OF-CONCEPT.WITHDRAWAL SYMPTOMS:\nSignificant differences were not observed in overall withdrawal severity between placebo and the high and low Ab groups (p >0.22).\n\nBODY.RESULTS.EFFECTS OF DOSE AND SCHEDULE.IMMUNOGENICITY AND EFFICACY BY STUDY GROUP:\nFigure 5 depicts immune response by study group from baseline to week 52. Anti-nicotine Ab GMCs increased across all active treatment groups after each vaccination, with each subsequent dose resulting in a higher Ab response than the previous dose. Schedule 2 showed a higher initial increase in Ab concentrations. The 400 μg/Schedule 2 group demonstrated the highest Ab concentrations. However, no significant differences (p >0.05) were observed in AUC, Cmax and Cavg across the treatment groups to 26 or 52 weeks likely because the study was not powered to detect such difference. An intent-to-treat analysis demonstrated that the Schedule 2, 400 μg dose group had significantly higher prolonged abstinence to 6 months as compared to placebo (17.6% vs. 6.0%; p=0.015; OR of 4.14; 95% CI, 1.32–13.02), although not significant for the Schedule 2, 200 μg dose (14.0% vs. 6.0%, p=0.054; OR=3.23; 95% CI, 0.98–10.67) or between placebo and Schedule 1 for each of the doses (p >0.84). The Schedule 2, 400 μg dose group also had the highest rates of prolonged abstinence to 12-months, significantly higher than placebo (15.7% vs. 6.0%, p=0.038; OR=3.44; 95% CI, 1.07–11.04), but not significant for the Schedule 2, 200 μg dose (14.0% vs. 6.0% for 200 (p=0.056; OR=3.21; 95% CI, 0.97–10.63) or between placebo and Schedule 1 for each dose (p >0.88).\n\nBODY.RESULTS.EFFECTS OF DOSE AND SCHEDULE.SAFETY:\nTable 2 lists the number of subjects experiencing local and systemic reactogenicity. Reactogenicity events were aggregated over all injections. Overall, ache and tenderness were the most commonly reported local events, with at least one report by nearly all subjects (86% to 98%) in each treatment group. Myalgia, malaise, and headache were the most commonly reported systemic events (64% to 88% of subjects). Swelling, heat, burning, erythema, and nausea were reported by about half the subjects. Fever and vomiting were less common (4% to 16%). A total of 1184 treatment-emergent adverse events (AEs), predominantly rated mild or moderate, were reported by 266 of 301 subjects; 87.1% of 3'AmNic-rEPA recipients and 91.0% of placebo recipients. On average, 3.7 and 4.3 events were observed per person in the vaccinated and placebo groups, respectively, including subjects with no events. The distribution of 161 physician-determined treatment-related AEs, according to severity and relationship to treatment, was similar for the 3'AmNic-rEPA and placebo arms. Seven 3'AmNic-rEPA recipients (3.5%) and 2 subjects in the placebo arm (2.0%) withdrew from the study due to adverse events. 18 Serious AEs were reported: 8 events in the 3'AmNic-rEPA treatment groups among 7 subjects (3.5% of the 3'AmNic-rEPA recipients) and 10 events in the placebo group among 5 subjects (5.0% of the placebo recipients). Only one of these SAEs (anaphylactic reaction in a 3'AmNic-rEPA 400 μg/Schedule 2 recipient) was considered by the Investigator to be treatment related. 
This subject, who had a history of urticaria to penicillin and seasonal allergies, experienced difficulty breathing, throat tightness, facial erythema and urticaria 70 minutes after the initial vaccination. The subject was treated with a single injection of subcutaneous epinephrine and diphenhydramine, which resolved the symptoms. Herpes zoster was reported in 6 subjects; one case occurred within 3–5 hours after the first vaccination, which would be impossible to link to vaccination. Of the remaining cases, 4 (2%) occurred in 3'AmNic-rEPA recipients, while 1 occurred in placebo (1%). In contrast, infection with the related herpes simplex virus was reported in 4 3'AmNic-rEPA recipients (1%) and 3 placebo recipients (2%).\n\nBODY.DISCUSSION:\nResults demonstrated the proof-of-concept that smokers who achieved higher anti-nicotine Ab concentrations were more likely to quit and remain abstinent from smoking. The high Ab group demonstrated the highest abstinence rates independent of the time period of ascertainment. Similarly, in a separate study conducted by Cornuz and coworkers [13] testing a different nicotine vaccine (nicotine derivative conjugated to a virus-like particle derived from bacteriophage Qβ), post hoc analysis showed that subjects stratified to the highest Ab group had a significantly higher quit rate than placebo. However, unlike the current study, which used the intent-to-treat (ITT) population to establish proof of concept, the reported finding by Cornuz et al. [13] was observed after eliminating about a third of the subjects who used nicotine replacement therapies during the course of the study or who had incomplete Ab titer values. In the present 3'AmNic study, subjects in the high Ab group had observed odds ratios of 4.4 (95% CI 1.5–12.7) and 3.9 (95% CI 1.3–11.2) for prolonged abstinence rates to 6 and 12 months versus placebo. Although no direct comparisons can be made, these odds ratios are not unlike ones observed in the meta-analyses conducted for the U.S. Clinical Practice Guideline, where the reported odds ratios ranged from 1.5 (95% CI 1.2–1.7) for nicotine gum to 3.1 (95% CI 2.5–3.8) for varenicline at 6 months post-quit [3]. If current study findings are confirmed in larger studies, immunotherapeutics are likely to emerge as an important aid to smoking cessation. No significant compensatory smoking in response to the presence of anti-nicotine antibodies, as determined by the number of cigarettes smoked per day, carbon monoxide levels or cotinine levels, was observed in this study. This result is consistent with observations from other studies [12, 13]. In the current study, subjects in the high Ab group who did not abstain smoked a significantly lower number of cigarettes (median reduction of ~5 cigarettes/day) and experienced lower cotinine levels (~20%) than placebo subjects, with no differences observed between the low Ab and placebo groups. However, a small number of individual subjects (N=15/301) smoked more than 2-fold their baseline number of cigarettes, with no significant differences between the active treatment and placebo groups. A major challenge for immunotherapeutics is to stimulate high Ab levels in the vast majority of smokers trying to quit. Vaccine dose and frequency of vaccination have an impact on the Ab levels attained. The 5 injection/400 μg dose was associated with the highest Ab response, although the difference was not statistically significant, possibly due to the small sample size. 
Importantly, this dose and schedule demonstrated a statistically significant improvement in 44-week CAR compared to placebo. Because the 4 injection/400 μg dose was not associated with higher abstinence rates, this result demonstrates that consideration of both dose and schedule of injection is critical. In an independent, follow-up immunogenicity study to examine if peak Ab levels could be elevated, a total of 74 subjects received 6 injections of 400 μg 3'AmNic-rEPA at weeks 0, 4, 8, 12, 16, and 26. More than 80% of subjects receiving the 6-dose immunization regimen exceeded a target level of Ab (25 μg/ml) by week 14. In contrast, only 50% of subjects receiving 5 injections of the 400 μg dose achieved this level by week 14 in the current study, and only 7% of subjects attained this level by the target quit date. This finding suggests that more frequent injections and a later quit date may increase treatment success. In general, 3'AmNic-rEPA was well-tolerated. The frequencies of local and systemic vaccine reactogenicity events were similar in the vaccine and placebo groups and similar to licensed adult vaccines containing Alum adjuvant [16]. The slight increase in cases of herpes zoster observed in this study may be spurious, but continued monitoring is necessary to determine if a causal relationship exists. The occurrence of a single anaphylactic reaction would suggest a need for continued monitoring and follow-up, even considering the subject's history of prior drug allergy to penicillin. In summary, results from this study support the concept that high anti-nicotine Ab levels are associated with higher rates of abstinence. These findings suggest that vaccines attaining high Ab levels by the target quit date may be more effective. Other future strategies may include examining additional ways to increase Ab levels across all individuals. Nonetheless, this study demonstrates that 3'AmNic-rEPA has significant potential as a smoking cessation aid and, perhaps, a relapse prevention aid.\n\nBODY.METHODS.STUDY POPULATION:\nSmokers were recruited via advertisement across nine geographically diverse U.S. sites. Interested subjects were screened over the telephone and more extensively screened at the first Screening Visit. Subject informed consent was obtained prior to screening. Subjects were 18 years of age or older, smoked ≥15 cigarettes/day, had exhaled CO ≥10 ppm, wanted to quit smoking, and were in good general physical and mental health. Exclusion criteria included recent use of any medications or drugs that might interfere with the immune response or interact with the vaccine, and use of pharmacotherapies or other treatments for smoking cessation. For females, a negative urine pregnancy test at enrollment and active use of acceptable birth control or documentation of surgical sterility was required.\n\nBODY.METHODS.STUDY DESIGN:\nThis Phase 2 study used a randomized, double-blind, placebo-controlled, parallel-arm design (see Figure 1). Four treatment groups varied the dose and/or schedule of intramuscular vaccination: 200 or 400 μg of 3'AmNic-rEPA or placebo according to Schedule 1 (weeks 0, 6, 12, and 26) or Schedule 2 (weeks 0, 4, 8, 16, and 26). Subjects (N=150) were first randomized within Schedule 1 groups in a 1:1:1 ratio (200 μg: 400 μg: placebo) and then 151 subjects were randomized within Schedule 2 groups in the same ratio. The TQD was 1 week after the second injection (end of week 7 for Schedule 1 and end of week 5 for Schedule 2). 
If the subjects relapsed (seven consecutive days of smoking) after the quit date, a second quit date coinciding with a future clinic visit was allowed between time of relapse and week 18. Cessation counseling (based on the USDHHS Clinical Practice Guidelines [17]) for the first quit attempt involved 5 standardized face-to-face sessions (≤10 minutes) and for the second quit attempt, a face-to-face session plus 2 post-quit telephone counseling sessions. Subjects were followed for 52 weeks after randomization and the first injection on Day 0 for a total of 21 visits. During the injection day, subjects remained at the study site for 30 to 60 minutes for observation and attended a visit 24 hours after each injection to assess side effects. Otherwise, visits ranged from weekly to bi-weekly and were less frequent after later injections. Subjects who failed to quit smoking on their TQDs were encouraged to remain in the study and continue to attempt to achieve abstinence. Subjects who terminated from the study were not replaced and presumed to be smokers. Institutional Review Board approval was obtained from all institutions involved in the study. A Data and Safety Monitoring Board (DSMB) met four times during the study.\n\nBODY.METHODS.INVESTIGATIONAL PRODUCT:\nThe active investigational product was purified 3'-aminomethylnicotine conjugated to P. aeruginosa r-exoprotein A. For the 200 and 400 μg/mL dose, each single-use syringe contained 3'-aminomethylnicotine conjugated to 200 or 400 μg rEPA, respectively, adsorbed to 1.2 mg aluminum hydroxide adjuvant (Alhydrogel 85) in 1 mL phosphate buffered saline (0.15 M NaCl, 0.002 M NaPO4, pH 7.2, 0.01% polysorbate 80; PBS). For the placebo dose, PBS with 1.2 mg Alhydrogel 85 was included in a 1 mL single-use syringe.\n\nBODY.METHODS.MEASURES:\nCigarette use was recorded daily on an electronic diary for 182 days, and then weekly for the remainder of the study. Exhaled CO and urine cotinine were measured at each study visit, except for visits within 24 hours of vaccination. Questionnaires were collected via electronic diary: Fagerström Test for Nicotine Dependence [18] (administered days 0, 210, 364), Minnesota Nicotine Withdrawal Scale [MNWS, 19] (administered weekly until Month 6) and data on other tobacco usage. Sera were collected for immunogenicity measurements at 16–17 time points (schedule-dependent) from baseline to week 52. Anti-nicotine antibody concentrations were measured using ELISA (Enzyme-linked Immunosorbent Assay) [12]. Subjects recorded standard local and systemic reactogenicity events for 7 days after each injection. All reactogenicity events were followed until resolution or study completion. Treatment emergent AEs were recorded until 4 weeks after the last dose, with the exception of SAEs, which were collected through week 52. Subjects were also periodically monitored at clinic visits for vital signs, weight, hematology, chemistry and urinalysis.\n\nBODY.METHODS.STATISTICAL ANALYSIS:\nThe Intent-to-Treat (ITT) population was used for evaluation of efficacy, safety and immunogenicity. The ITT population was defined as all subjects who were randomized to treatment. The primary endpoint was continuous smoking abstinence for a total of 8 weeks measured from the beginning of week 19 to the end of week 26 (determined from subject diaries confirmed by exhaled CO ≤8 ppm). 
The analysis for proof-of-concept stratified the active-treatment recipients into high (top 30% AUC from weeks 0 to 26) and low (bottom 70% AUC from weeks 0 to 26) Ab responders, regardless of treatment group. An a priori decision was made to establish the antibody cutoff between 50% and 25%. The top 30% by AUC group was selected as the largest group of high antibody responders between the 25% and 50% cutoffs that demonstrated statistical significance as compared to placebo. Smoking outcomes were compared between subjects with high Ab and pooled placebo recipients using logistic regression. Secondary aims of this study were to assess a) 7-day point prevalence abstinence at various times, CARs during 52 weeks and time to sustained abstinence, defined as attaining 8 weeks of continuous abstinence at any time prior to week 46 and maintaining continuous abstinence through 52 weeks; b) the impact on compensatory smoking amongst non-abstainers; c) withdrawal symptoms; and d) the immunogenicity, efficacy and safety of administration of either 4 or 5 of the 200 and 400 μg doses. Secondary smoking cessation analyses used logistic regression for binary outcomes and Cox proportional hazards regression models and log-rank tests for time-to-sustained abstinence analyses. Mixed-effects repeated-measures analyses of the number of cigarettes smoked, CO and cotinine, adjusted for baseline, were used to assess compensatory smoking among non-abstainers and withdrawal symptoms. Anti-nicotine Ab responses were summarized as GMC with 95% confidence intervals. Safety was assessed throughout this study primarily using reactogenicity and adverse events (AEs). Reactogenicity data for 7 days after each injection were tabulated, and the proportions of subjects with any post-vaccination reactogenicity, aggregated over all injections, among the five treatment groups were compared using the Generalized Cochran-Mantel-Haenszel test. When a subject dropped out of the study, they were presumed to be smokers. Otherwise, all missing diary data related to cigarette use were imputed utilizing the Last Observation Carried Forward principle. No imputation was carried out for CO level. The missing serology data were imputed by first defining a set of injection windows for each schedule. The missing serology was imputed by using the next available measured serology in its corresponding window; if the next value was not available, the value of the nearest previous time point in that window was used. The AUC for anti-nicotine Ab was calculated based on imputed data. For Ab values one week after the target quit date, the value at week 9 was used for Schedule 1 as sera were not collected at week 8.\n\n**Question:** Compared to placebo what was the result of low Ab group on Prolonged abstinence from smoking rates to 12 months?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** no significant difference\n"
}
] |
no significant difference
|
93
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Routine resite of peripheral intravenous devices every 3 days did not reduce complications compared with clinically indicated resite: a randomised controlled trial\n\n ABSTRACT.BACKGROUND:\nPeripheral intravenous device (IVD) complications were traditionally thought to be reduced by limiting dwell time. Current recommendations are to resite IVDs by 96 hours with the exception of children and patients with poor veins. Recent evidence suggests routine resite is unnecessary, at least if devices are inserted by a specialised IV team. The aim of this study was to compare the impact of peripheral IVD 'routine resite' with 'removal on clinical indication' on IVD complications in a general hospital without an IV team.\n\nABSTRACT.METHODS:\nA randomised, controlled trial was conducted in a regional teaching hospital. After ethics approval, 362 patients (603 IVDs) were randomised to have IVDs replaced on clinical indication (185 patients) or routine change every 3 days (177 patients). IVDs were inserted and managed by the general hospital medical and nursing staff; there was no IV team. The primary endpoint was a composite of IVD complications: phlebitis, infiltration, occlusion, accidental removal, local infection, and device-related bloodstream infection.\n\nABSTRACT.RESULTS:\nIVD complication rates were 68 per 1,000 IVD days (clinically indicated) and 66 per 1,000 IVD days (routine replacement) (P = 0.86; HR 1.03; 95% CI, 0.74-1.43). Time to first complication per patient did not differ between groups (KM with log rank, P = 0.53). There were no local infections or IVD-related bloodstream infections in either group. IV therapy duration did not differ between groups (P = 0.22), but more (P = 0.004) IVDs were placed per patient in the routine replacement (mean, 1.8) than the clinical indication group (mean, 1.5), with significantly higher hospital costs per patient (P < 0.001).\n\nABSTRACT.CONCLUSIONS:\nResite on clinical indication would allow one in two patients to have a single cannula per course of IV treatment, as opposed to one in five patients managed with routine resite; overall complication rates appear similar. Clinically indicated resite would achieve savings in equipment, staff time and patient discomfort. There is growing evidence to support the extended use of peripheral IVDs with removal only on clinical indication.\n\nABSTRACT.REGISTRATION NUMBER:\nAustralian New Zealand Clinical Trials Registry (ANZCTR) Number ACTRN12608000421336.\n\nBODY.BACKGROUND:\nPeripheral intravenous device (IVD) insertion is the most commonly performed invasive procedure in hospitalised patients, with an estimated 150 million peripheral intravenous devices placed each year in North America alone [1]. IVDs are vital for delivery of hydration, medicines and nutrition but are not without complications. 
Serious adverse outcomes are fortunately rare, with IVD-related bloodstream infection reported in a recent meta-analysis of 110 studies to occur in 0.1% of devices and 0.5 per 1,000 device days [2]. IVD treatment is more frequently interrupted by phlebitis, an irritation of the vein characterised by pain, tenderness on palpation, erythema, warmth, swelling, induration or palpable cord (thrombosis) of the cannulated vein; diagnostic algorithms usually require two or more of these conditions [3-5]. Phlebitis is in almost all cases a biochemical reaction to the mechanical irritation by the presence of the IVD and associated infusate [3], although phlebitis symptoms such as erythema may be misperceived as indicative of an infection. In fact, there is not a high correlation between phlebitis and device infection, and the Centers for Disease Control (CDC) states that infection is rarely associated with peripheral, as opposed to central, venous devices [3,6,7]. Fluid infiltration or 'tissuing' of devices is another common IVD complication which may also reflect the inflammatory (phlebitic) response of the vein, rather than simple misplacement of the device tip [8]. Early cohort studies noted an association between increased device time in situ and phlebitis [9,10]. This association was responded to with policies for routine device removal. Recommended timelines for routine resite have been extended over the past three decades from 24, to 48, then to 72 hours. Currently, 72- to 96-hour resite is recommended to reduce phlebitis by the CDC's 2002 Guidelines for the Prevention of Intravascular Device Infection, with the exemption that this is not required in children or those with poor veins [7]. Such policies cause increased workload in hospitals, where the task of removing and replacing well-functioning IVDs generally falls to busy nursing and junior medical staff. In addition, few patients welcome the prospect of additional venipuncture. Despite the general clinical acceptance of routine IVD replacement as a phlebitis and infection prevention measure, it has not been supported by recent data. It may be that the risk of complications during the entire IVD treatment episode is similar, regardless of whether multiple short-dwell or fewer longer-dwell IVDs are used over this time. Three small (n = 47-60) randomised, controlled trials (RCTs) suggested routine replacement at 12, 24 or 48 hours may reduce phlebitis compared to resite on clinical indication, although a systematic review for the Swedish Council on Technology Assessment in Healthcare assessed these as low- to medium-quality studies providing 'limited scientific evidence' [11-14]. More recently, two well-conducted RCTs found no evidence of effect when comparing IVD replacement every 3 days with replacement only on clinical indication for medical and surgical inpatients [15,16]. The largest of these studies reported findings from 755 general medical and surgical patients with 1,428 IVDs and found a 5% difference in combined phlebitis and infiltration rates per patient (38% clinically indicated resite, 33% routine resite), suggesting a potential small clinical benefit of 3-day resite [15]. However, this difference was not statistically significant (RR 1.15; 95% CI, 0.95-1.40) and disappeared when overall cannulation time was considered (59.8/1,000 IVD days clinically indicated resite, 60.9/1,000 IVD days routine resite; RR 0.98; 95% CI 0.78-1.24) [15]. 
In addition, no clinically important or statistically significant differences were observed in the secondary endpoints of phlebitis, infiltration, occlusion, local infection or suspected bloodstream infection rates between study groups [15]. Another recent RCT in the 'hospital in the home' community setting also found no clinically important or statistically significant difference in phlebitis, occlusion or bloodstream infection rates in 316 patients when resite every 3 days was compared with clinically indicated resite [17]. A 2010 Cochrane Collaboration review concluded there was 'no conclusive evidence of benefit' of routine IVD resite and suggested organisations could consider adopting a resite on clinical indication policy [18]. There is growing evidence that routine IVD replacement may be ineffective, although caution has been urged in light of the large number (74% in both groups in the largest study to date) of reported devices inserted by a specialised IV team, a factor known to reduce complications [19]. Device insertion (and reinsertion) is unpleasant for patients, requires skilled and available clinical staff, and has associated costs for the health sector. If replacement only on clinical indication is safe and effective, this would have important benefits for patients and the health system. We report an RCT of 3-day routine IVD resite versus clinically indicated replacement in a medical-surgical hospital where IVDs were inserted by the general medical and nursing staff; the institution did not have a specialised IV service.\n\nBODY.METHODS.AIM:\nThe aim of the study was to compare the impact of 3-day routine resite with that of clinically indicated resite on peripheral IVD complications.\n\nBODY.METHODS.DESIGN:\nOpen (nonblinded), parallel group RCT.\n\nBODY.METHODS.ETHICS:\nThe study was approved by the Tasmanian State Human Research Ethics Committee. Written informed consent was obtained prospectively from all participants.\n\nBODY.METHODS.SETTING AND SAMPLE:\nThe study was undertaken at a large regional teaching hospital in Australia, which functions as the tertiary referral centre for the northern half of the State of Tasmania. The hospital has more than 32,000 separations per annum, with a spectrum of medical and surgical specialties. Eligible patients were at least 18 years of age and scheduled or expected to have a peripheral IVD indwelling for at least 4 days, and they gave written informed consent. Exclusion criteria were immunosuppression, current bloodstream infection or an IVD already in situ for >48 hours. IVDs were inserted and cared for by the general nursing and medical staff; there was no special IV team or service.\n\nBODY.METHODS.SAMPLE SIZE:\nSample size calculations were performed using PASS 2008 (Version 8.0.8; Kaysville, UT) to detect a change in rates by 30% (from 36% to 25%, two-tailed α = 0.05, 90% power) on the basis of the complication rates of routinely resited IVs in a previous study [16]. Although this indicated that n = 378 per group (total 756) were required, the study was ceased early (total n = 606 IVs) because all investigators left the employment of the institution. Consequently, study power was reduced, but remained over 80% (required minimum n = 282 per group).\n\nBODY.METHODS.RECRUITMENT:\nAll adult patients admitted to the inpatient acute medical and surgical wards of the study hospital were screened by a full-time research nurse. 
This excluded paediatric, day-surgery, mental health, obstetric, critical care and dialysis units.\n\nBODY.METHODS.STUDY PROCEDURES:\nPatients were randomly assigned (computer generated) in a 1:1 allocation ratio to either the 'routine replacement' (control) or 'clinically indicated replacement' (intervention) group. Assignment was concealed until randomisation by use of a telephone service. A tag was placed on the insertion site indicating the study group. All devices for the patient were managed as per randomised group. The intervention group did not have their IVD resited unless clinically indicated. This decision was made by the treating clinician (not the investigators), who ordered IVD resite if the device failed or phlebitis occurred and ongoing IV treatment was required. The control group had a new device relocated to a different site by the general medical or nursing staff every 3 days. Control devices could also be removed at any stage by the clinical staff if they were not required or if complications occurred. Clinical nursing and medical staff undertook insertion and follow-up care of all IVDs as per the CDC Guidelines [7]. Laboratory staff undertaking microbiological culture assessments were blinded to the study group. Due to the nature of the intervention, patients, research, and clinical staff were unable to be blinded. However, the investigators had no involvement in assessing or documenting complications. IVDs were assessed by the clinical nursing staff on each nursing shift for complications as part of standard clinical practice in the hospital. Times and dates of device insertion and removal were recorded along with the reason for device removal and any protocol deviations. A full-time research nurse collected data from the hospital chart and sought clarification from patients and clinical staff if necessary. Microbiological investigations (device tip, blood cultures and site cultures) were performed by the clinical staff on clinical suspicion of infection by the treating clinician. Demographic and clinical data were collected on age, sex, diagnosis at hospital admission, phlebitis risk based on Tagar et al.'s classification (low/medium/high risk) [20], past history of phlebitis, any comorbidities requiring active medical treatment (e.g., type 2 diabetes or congestive heart failure), haemoglobin, concurrent infection at other sites, antibiotic therapy, type of surgery, type of infusate and any additives (and their level of irritability), vein and skin quality assessment, size of device, insertion site, health professional inserting the device, and setting for insertion, presence of other vascular devices, wound drains and urinary catheters. Vein quality was assessed as good (vein easy to visualise and easy to palpate with tourniquet on), fair (not easily visible but can palpate with tourniquet), or poor (veins small, scarred or difficult to palpate with tourniquet; may require heat pack to aid vasodilation). Skin quality was assessed as good (healthy, well hydrated, elastic), fair (mildly dehydrated, reduced elasticity), or poor (papery, dehydrated, or reduced elasticity).\n\nBODY.METHODS.ANALYTIC APPROACH:\nThe primary outcome was a composite measure of any complication causing unplanned cannula removal prior to completion of IV treatment. The composite included phlebitis, infiltration, occlusion, accidental removal, local infection, and IV device-related bloodstream infection (IVD-BSI). These were also analysed individually as secondary endpoints. 
A composite measure was chosen due to the low rates of these conditions individually and to the assumption that they are comparable measures of 'infusion failure'; that is, the device can no longer be used to deliver treatment. This approach has been used in previous studies on the topic [15-17]. Phlebitis was defined as two or more of pain, erythema, purulence, streak formation, or a palpable venous cord [3]. Local infection and IVD-BSI (bacteremia/fungemia with at least one positive blood culture obtained from a peripheral vein, clinical manifestations of infection, and no apparent source for the bloodstream infection (BSI) except the device with or without positive tip or entry site swab culture) were defined using CDC criteria [7]. Other secondary outcomes were time in situ (hours of catheterisation from insertion to removal, both per patient and per device) [7]; IVDs per patient (number of peripheral devices inserted to complete the course of treatment) [7]; and costs (calculations based on 20 minutes of nursing or medical insertion time at relevant rates [15], plus the cost of the required equipment (cannula, insertion pack including dressing and solution, gloves, saline, syringe, extension tubing and starter pack for all, plus fluid bag, tubing and secondary tubing for medication administration for those patients requiring this) for insertions, nursing time and equipment to routinely remove IVDs that were otherwise functional, and the costs of treating any complications that occurred (e.g., IVD-BSI)). Cost calculations were undertaken from the viewpoint of the hospital using negotiated wage costs and purchasing agreements for government hospitals in the State of Tasmania. Costs would be similar for other Australian hospitals. All randomised patients were analysed by intention to treat. Each patient was randomised once and could have multiple IVDs, with subsequent IVD resites managed as per the randomised group. Relative incidence complication rates per 1,000 IVD days and 95% confidence intervals were calculated to summarise the impact of clinically indicated replacement relative to 3-day replacement. Kaplan-Meier survival curves were drawn to compare time to first IVD complication between patients in the two study groups. To assess for any potential impact of protocol violations, a per protocol analysis was also undertaken. All clinical and demographic variables were subjected to univariate testing against the primary endpoint to guide selection of possible covariates for the multivariable model. Cox proportional hazards regression modelling was used to examine the impact of age, gender, oncology status, number of comorbidities (nil, one, two, or more than two), IV gauge, site, vein quality, skin quality, oral antibiotics, IV antibiotics, wound drain, inserter occupation, initial versus subsequent IVDs, phlebitis in a preceding IVD, haemoglobin level, parenteral nutrition, continuous versus intermittent infusion, patient risk category and study group on the outcome of time to complication events using an additive model [3,5,7,20-25]. In addition, to adjust for any inherent correlations or codependencies in the failure times of IVDs (i.e., same patient multiple failure-time data) within the Cox model, we also used the Prentice-Williams-Peterson conditional risk-set method [26]. The Mann-Whitney test was used to compare various secondary outcomes between study groups. Cost differences were calculated using arithmetic means and the t-test [27]. P values <0.05 were considered significant. 
All statistical data were entered and analysed using SPSS (Version 15.0; Chicago, IL) and Stata (Version 8.2; College Station, TX).\n\nBODY.RESULTS.SAMPLE:\nOver a 10-month period, 1,954 patients were screened for eligibility. Of these, 788 were eligible, with 362 (46%) recruited into the study. The most frequent exclusion criterion was altered mental state that precluded consideration of consent as assessed by the research nurse. Altered mental state was generally related to older medical patients and the immediate postoperative phase for surgical patients. Reasons for exclusion are shown in Figure 1. The 362 patients were randomised into either the routine change group (n = 177 participants, 323 devices) or the clinically indicated replacement group (n = 185 participants, 280 devices). In total 50,173 IVD hours were studied (routine change group 23,288 hours, clinically indicated group 26,885 hours). More patients in the routine change group had an active infection (53% vs. 44%) and were receiving IV antibiotics (73% vs. 64%). However, as shown in Tables 1 and 2, the two groups were generally comparable at baseline for patient- and cannula-related factors. Figure 1Participant flowchart. Table 1 Baseline characteristics of study participants Variable 3-Day Routine Change Group (n = 177) Clinically Indicated Change Group (n = 185) Sex - Male, n (%) 96 (54%) 103 (56%) Age, mean (SD) 65.1 (17.3) 62.7 (15.4) Reason for admission, n (%) • Gastrointestinal 46 (26%) 52 (28%) • Respiratory 37 (21%) 34 (18%) • Oncology 21 (12%) 21 (11%) • Orthopaedic 16 (9%) 22 (12%) • Cardiac 10 (6%) 10 (5%) • Neurological 9 (5%) 10 (5%) • Vascular 7 (4%) 7 (4%) • Renal 16 (9%) 9 (5%) • Other 15 (8%) 20 (11%) Number of comorbidities, n (%) • None 6 (3.4%) 10 (5.4%) • 1 22 (12.4%) 21 (11.4%) • 2 44 (24.9%) 54 (29.2%) • >2 105 (59.3%) 100 (54.1%) Type of surgery, n (%) • Nil 124 (70%) 132 (71%) • Gastrointestinal 22 (12%) 22 (12%) • Orthopaedic 8 (5%) 15 (8%) • Other 23 (13%) 16 (9%) Most recent Hb - Mean (SD) 124.8 (23.2) 126.2 (21.9) Table 2 Baseline infusion-related characteristics of study devices Variable 3-Day Routine Change Group (n = 323) , n (%) Clinically Indicated Change Group (n = 280) , n (%) IV cannula gauge • 22 43 (13%) 46 (16%) • 20 241 (75%) 201 (72%) • 18 27 (12%) 30 (11%) • Other 2 (1%) 3 (1%) Vein assessment • Good 143 (44.3%) 107 (38.2%) • Fair 160 (49.5%) 144 (51.4%) • Poor 20 (6.2%) 29 (10.4%) Skin integrity • Good 138 (43%) 104 (37%) • Fair 178 (55%) 164 (59%) • Poor 7 (2%) 12 (4%) Past history of phlebitis 2 (0.6%) 4 (1.4%) Insertion site • Hand 206 (64%) 187 (67%) • Forearm 59 (18%) 49 (18%) • Cubital fossa 51 (16%) 38 (14%) • Other 7 (2%) 6 (2%) Receiving infusate 241 (75%) 215 (77%) pH of infusate - Mean (SD) 6.0 (0.5) 6.0 (0.4) Receiving oral antibiotics 24 (7%) 24 (9%) Receiving IV antibiotics 236 (73%) 176 (63%) pH of IV antibiotics - Mean (SD) 6.9 (1.1) 6.9 (1.3) Receiving other IV meds 190 (59%) 179 (64%) pH of other IV meds - Mean (SD) 5.6 (2.6) 6.0 (2.8) Wound drain 50 (16%) 40 (14%) Urinary catheter 55 (17%) 41 (15%) Other vascular device 26 (8%) 24 (9%) Inserted by • Junior doctor 232 (72%) 207 (74%) • Registered Nurse 67 (21%) 57 (20%) • Senior doctor 24 (7%) 16 (6%) Where inserted • Ward 216 (67%) 188 (67%) • Emergency department 74 (23%) 70 (25%) • Other 33 (10%) 22 (8%) Current infection (site) • Respiratory 97 (30%) 49 (18%) • Urinary 24 (7%) 19 (7%) • Wound 16 (5%) 15 (5%) • Other 29 (9%) 37 (13%) • >1 site 5 (2%) 4 (1%) N/A (none) 152 (47%) 156 (56%) Risk of phlebitis 
(Tagar scale) • Low 174 (54%) 144 (51%) • Medium 149 (46%) 136 (49%) • High 0 0 \n\nBODY.RESULTS.EFFECT OF INTERVENTION ON PRIMARY OUTCOME:\nOutcome data were available for all patients. Table 3 shows the rates of primary and secondary outcomes. Differences in complication rates between groups were not significantly different (routine replacement 66.0 per 1,000 IVD days; clinical replacement 67.8 per 1,000 IVD days; HR 1.03; 95% CI, 0.74-1.43; P = 0.86). As shown in Figure 2, the time to first complication per patient was also not significantly different between groups (Kaplan Meier [KM] with log rank P = 0.53). On crude rate per IVD, the catheters replaced on clinical indication had higher complication rates (110/280 or 39% vs. 91/323 or 28%; P = 0.004). However, total complication rates per patient (to deliver the course of IV therapy) were not significantly different (P = 0.39) between clinically indicated (76/185, 41%) and routine resite patients (64/177, 36%). Table 3 Effect of intervention on primary and secondary endpoints Outcomes 3-Day Routine Change Group (n = 177) Clinically Indicated Change Group (n = 185) RR (95% CI) , P Value Primary: IVD complications per patient, n (%) 64 (36%) 76 (41%) RR 1.14 (0.88, 1.47), p = 0.39 IVD complications per 1000 IVD days 66.0 (95% CI 49.8, 82.1) 67.8 (95% CI 52.6, 83.1) HR 1.03 (0.74, 1.43), p = 0.86 Secondary: Phlebitis, n (%) 12 (7%) 18 (10%) RR 1.44 (0.71, 2.89), p = 0.34 Infiltration, n (%) 53 (30%) 61 (33%) RR 1.10 (0.81, 1.49), p = 0.57 Occlusion, n (%) 5 (3%) 4 (2%) RR 0.77 (0.21, 2.80), p = 0.75 Accidental removal, n (%) 11 (6%) 16 (9%) RR 1.39 (0.66, 2.92), p = 0.43 Local infection 0 0 - IVD-related BSI, n (%) 0 0 - IVD costs per patient, AUD$ mean (SD) $55.42 ($35.26) $43.35 ($26.78) Mean difference $12.07 (95%CI $5.57, $18.56), p < 0.001 Figure 2Kaplan-Meier survival curve of time to first intravenous device complication per patient (log rank, P = 0.53). Patient- and device-related variables considered in the multivariable model were older age, number of comorbidities (nil, one, two or more than two), smaller cannula size, poor skin or vein integrity, IV antibiotics, insertion by medical staff and study group. None of these were found to be statistically significant. The final Cox proportional hazards model after adjusting for time found study group was not a significant factor (HR 1.02; 95% CI, 0.77-1.36; P = 0.89). Variance-adjustment testing for potential multiple-failures per patient (cannula data) found no inconsistency in significant results compared to the main Cox model.\n\nBODY.RESULTS.PROTOCOL COMPLIANCE:\nCompliance with the study intervention was achieved in 78% (251 of 323) of routinely replaced IVDs removed at Day 3 and 100% (280 of 280) of IVDs resited on clinical indication. Noncompliance in the routine resite group was generally related to high staff workloads on that day or to staff's perception that the patient was soon to be discharged or have therapy completed, and so the IVD remained in situ beyond Day 3. A per protocol analysis was performed including only those patients in the routine replacement group whose IVDs were all actually removed by 96 hours (n = 251). This found no significant difference (KM with log rank P = 0.16) in the rate of complications between groups (routine replacement 92 per 1,000 IVD days vs. 
clinically indicated 68 per 1,000 IVD days).\n\nBODY.RESULTS.EFFECT OF INTERVENTION ON SECONDARY OUTCOMES:\nThere was no statistically significant difference in group outcomes for any phlebitis (P = 0.34), infiltration (P = 0.57), occlusion (P = 0.75), or accidental removal (P = 0.43). No cases of local infection or IVD-related bloodstream infection occurred in either group. For overall IVD therapy (total cannulation time per patient for all devices), clinically indicated devices had a median therapy of 120 hours (quartiles 86.5 and 172.5 hours), and those replaced routinely had median therapy of 113 hours (quartiles 72 and 172 hours) (P = 0.21). For individual IVD dwell times, the clinically indicated group devices had a median dwell time of 85 hours (quartiles 51 and 121 hours), and those replaced routinely had a median dwell time of 71 hours (quartiles 48 and 90 hours) (P < 0.001). The maximum IVD dwell time was 1,023 hours (43 days) in the clinical replacement group, and this cannula was still working well on removal for completion of therapy. The overall number of IVDs per patient was significantly less (P = 0.004) for those replaced on clinical indication (mean 1.5, SD 0.8, median 1, quartiles 1 and 2) than for those routinely replaced (mean 1.8, SD 1.1, median 1, quartiles 1 and 2). A total of 22% of patients in the routinely replaced group had three or more IVDs compared with 9% in the clinical indication group. A total of 82 (28%) IVDs in the routine replacement group were resited after 3 days despite functioning well and ongoing treatment being required. (The remainder removed at this time were no longer required and so not resited, or infusion failure had already occurred.) Mean hospital costs per patient for the course of IV therapy were significantly higher (P < 0.001) for those managed with routine resite (mean $55.42, SD $35.26) compared with resite on clinical indication (mean $43.35, SD $26.78).\n\nBODY.DISCUSSION:\nThe finding that 3-day routine resite was not an effective intervention was consistent across the intention-to-treat primary analysis and the per protocol analysis. There remained no effect when events were expressed per patient or per 1,000 IVD days. Neither composite nor individual complication rates differed between groups, and there were no cases of local or device-related bloodstream infection. It appears safe and practical to leave IVDs in situ as long as they are functioning well and are still needed for clinical treatments. All IVDs will fail eventually, but this study shows that artificially shortening the lifespan of individual catheters does not reduce the overall complication rates over the course of therapy. Our results indicate that the average duration of IV therapy is 5-6 days and that many catheters can remain complication-free for this period. If catheters are not routinely resited, the median dwell time would remain within the 72-96 hours recommended by the CDC, but about 10% would remain in situ for longer (in this study up to 43 days with no complications). Our data show that a policy of resite on clinical indication would mean that one of every two patients would need a single cannula to receive treatment, whereas a 3-day change policy would result in only one in five patients having this scenario, with the rest requiring multiple cannulations and therefore additional pain and inconvenience. 
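The complication rates per 1,000 IVD days reported earlier in the Results (66.0 for routine replacement, 67.8 for clinically indicated replacement) follow directly from the reported event counts and device-hours, as the short worked example below shows. The exact Poisson interval used here for a 95% CI is only one plausible choice for illustration; the paper does not state which interval method was applied.

```python
# Worked example: complication rate per 1,000 IVD days from the reported totals.
# The exact Poisson 95% CI is an illustrative assumption, not the paper's stated method.
from scipy.stats import chi2

def rate_per_1000_ivd_days(events, device_hours, alpha=0.05):
    days = device_hours / 24.0
    rate = events / days * 1000
    lower = chi2.ppf(alpha / 2, 2 * events) / 2 / days * 1000
    upper = chi2.ppf(1 - alpha / 2, 2 * (events + 1)) / 2 / days * 1000
    return rate, lower, upper

# Routine 3-day change: 64 complications over 23,288 device-hours -> ~66.0 per 1,000 IVD days.
print(rate_per_1000_ivd_days(64, 23288))
# Clinically indicated change: 76 complications over 26,885 device-hours -> ~67.8 per 1,000 IVD days.
print(rate_per_1000_ivd_days(76, 26885))
```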
The results are consistent with the findings of recent RCTs in both hospitals and the community that have found no benefit in routinely resiting IVDs every 3 days [15-17]. In these studies, many cannulae were inserted by an expert IVD team [15-17], which may have minimised complications in both groups [19]. Our study confirms and extends these findings into the general medical/surgical setting without an IV team where IVDs were inserted by a variety of nursing and medical staff. Data from this study were included in a 2010 Cochrane Collaboration systematic review and meta-analysis on the topic [18]. This review included six trials (n = 3,455) and reported no clinically important or statistically significant difference in catheter-related bloodstream infection or phlebitis between IVDs that were routinely resited (at 48-96 hours) or resited on clinical indication, yet there were significantly lower costs in the group resited on clinical indication [18]. The belief that routine resite of IVDs will prevent complications appears to stem from early observational studies that noted longer-dwelling IVDs had more complications than shorter-dwelling IVDs [9,10]. This is intuitively true, given, for example, that an IVD in situ for 6 days has twice the time exposure of risk than an IVD in situ for 3 days. However, this does not prove that two sequentially inserted IVDs (in the same patient), both used for 3 days, have a combined lower risk over time than the 6-day dwell device. Indeed, this and other recent trials strongly suggest that the risk for the patient over the 6-day period is similar. Well-designed cohort studies with modern catheter materials suggest that the daily risk of phlebitis is relatively stable after the first 24 or 48 hours [3,21,28-31]. The peak in phlebitis between 24 and 48 hours is likely associated with the time taken by the body to mount a biological response after the instigation of therapy; those most likely to develop phlebitis will do so at this time. The results support the extension of the use of peripheral IVDs beyond the 72-96 hours currently recommended by the CDC [7]. There is incongruity in the CDC recommendations; they recommend not to routinely resite IVDs in children or in those with limited venous access. If it is safe in these populations, it is unclear why it would be necessary to routinely resite adults or those with better veins. Higher-risk cannulae such as central venous devices are no longer recommended by the CDC for routine replacement, because trials showed this was not of benefit [7]. Our study also confirms that the CDC guidelines are not always complied with; one fifth of IVDs in the routine change group were not replaced by this time. However, the per protocol analysis showed that the intervention remained ineffective even for those who truly had their IVDs resited every 3 days. Limitations of the study included a 9% higher frequency of IV antibiotics and concurrent infection in the routine resite group. This may have put the group at higher risk due to vein irritation, or conversely it protected against bacterial entry. Neither variable was significant in the multivariable model. The unblinded study design was unavoidable, but also a limitation. Our use of clear outcome measures, a full-time research nurse and laboratory staff blinded to culture assessments should have reduced the risk for potential bias. Resource constraints prematurely ended recruitment, thus reducing the anticipated power of the study from 90% to 80%. 
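The exposure-time point in the paragraph above can be made concrete with a toy calculation: if the daily risk of failure were a constant p, a single catheter dwelling 6 days and two consecutive catheters dwelling 3 days each would carry exactly the same cumulative risk of at least one failure. This is an illustrative simplification under an assumed constant hazard (the cited cohort data suggest the daily risk is roughly stable only after the first 24-48 hours), not a result from the trial.

```python
# Toy illustration under an assumed constant daily failure probability p:
# one 6-day catheter vs. two consecutive 3-day catheters in the same patient.
p = 0.05  # hypothetical daily probability of infusion failure

risk_one_6_day_catheter = 1 - (1 - p) ** 6
risk_two_3_day_catheters = 1 - ((1 - p) ** 3) ** 2  # must survive both catheters in turn

print(risk_one_6_day_catheter, risk_two_3_day_catheters)  # identical, ~0.265 for both
```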
Routine IVD resite involves pain for patients, staff procedural time, equipment costs and environmental waste. Contemporary evidence suggests the current policy for routine resite by 72-96 hours is ineffective and should be replaced with a 'resite on clinical indication' policy. It remains imperative that clinical staff monitor IVDs closely and that a daily review of the need for continuing therapy be made, with cessation as soon as possible; the only no-risk IVD is no IVD. Of the 4.3 million acute hospital admissions in Australia each year (excluding day cases), over half have IV treatment [15,32]. Conservatively, if even 2 million courses of IV therapy were managed with clinically indicated rather than routine resite, this would save the unnecessary insertion of approximately 660,000 IVDs and free 280,000 hours of staff insertion time. Assuming our costs are similar to those in other Australian hospitals, a change to resite on clinical indication would save approximately AUD$24 million nationally each year.\n\nBODY.CONCLUSIONS:\nAlthough larger, multisite trials are required, evidence to date suggests that routine resite of peripheral IVDs increases patient discomfort and healthcare costs, but does not reduce IVD complications as has traditionally been thought.\n\nBODY.COMPETING INTERESTS:\nThe authors declare that they have no competing interests.\n\nBODY.AUTHORS' CONTRIBUTIONS:\nCMR designed the study, applied for funding and drafted the manuscript. DM participated in study design and manuscript preparation, and managed the study. JM recruited patients, ensured protocol integrity and collected data. MRM undertook statistical analyses and assisted with drafting the manuscript. All authors read and approved the final manuscript.\n\nBODY.AUTHORS' INFORMATION:\nCMR, PhD is Professor, Research Centre for Clinical and Community Practice Innovation, Griffith University, Australia. DM holds a Masters by Research and is Senior Lecturer, University of Tasmania, Australia. JM is an RN and Nurse Unit Manager, Calvary Tasmania, Australia. MRM is a PhD qualified Biostatistician, Gippsland Medical School, Monash University, Australia.\n\nBODY.PRE-PUBLICATION HISTORY:\nThe pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1741-7015/8/53/prepub\n\n**Question:** Compared to staff inclination replacement what was the result of routine replacement on infiltration?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** no significant difference\n"
}
] |
no significant difference
|
192
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Ertapenem Once a Day Versus Piperacillin–Tazobactam Every\n6 Hours for Treatment of Acute Pelvic Infections:\nA Prospective, Multicenter, Randomized, Double-Blind Study\n\n ABSTRACT:\nObjective: To compare ertapenem therapy with piperacillin–tazobactam therapy for the management of acute\npelvic infections.\n Methods: In a multicenter, double-blind study, 412 women with acute pelvic infection were assigned to one of\ntwo strata, namely obstetric/postpartum infection or gynecologic/postoperative infection, and were then\nrandomized to ertapenem, 1 g once a day, or piperacillin–tazobactam, 3.375 g every 6 hours, both administered\nintravenously.\n Results: In total, 163 patients in the ertapenem group and 153 patients in the piperacillin–tazobactam group were\nclinically evaluable. The median duration of therapy was 4.0 days in both treatment groups. The most common\nsingle pathogen was Escherichia coli . At the primary efficacy endpoint 2–4 weeks post therapy, 93.9% of patients\nwho received ertapenem and 91.5% of those who received piperacillin–tazobactam were cured (95%\nconfidence interval for the difference, adjusting for strata, –4% to 8.8%), indicating that cure rates for both\ntreatment groups were equivalent. Cure rates for both treatment groups were also similar when compared by\nstratum and severity of infection. The frequency and severity of drug-related adverse events were generally\nsimilar in both groups.\n Conclusions: In this study, ertapenem was as effective as piperacillin–tazobactam for the treatment of acute pelvic\ninfection, was generally well tolerated, and had an overall safety profile similar to that of piperacillin–tazobactam.\n\n**Question:** Compared to Intravenous piperacillin–tazobactam, 3.375 g every 6 hours what was the result of Inravenous ertapenem, 1 g once a day on obstetric/postpartum infection or gynecologic/postoperative infection?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** no significant difference\n"
}
] |
no significant difference
|
110
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Liraglutide, a once-daily human GLP-1 analogue, added to a sulphonylurea over 26 weeks produces greater improvements in glycaemic and weight control compared with adding rosiglitazone or placebo in subjects with Type 2 diabetes (LEAD-1 SU)\n\n ABSTRACT.AIM:\nTo compare the effects of combining liraglutide (0.6, 1.2 or 1.8 mg/day) or rosiglitazone 4 mg/day (all n ≥ 228) or placebo (n = 114) with glimepiride (2–4 mg/day) on glycaemic control, body weight and safety in Type 2 diabetes.\n\nABSTRACT.METHODS:\nIn total, 1041 adults (mean ± sd), age 56 ± 10 years, weight 82 ± 17 kg and glycated haemoglobin (HbA1c) 8.4 ± 1.0% at 116 sites in 21 countries were stratified based on previous oral glucose-lowering mono : combination therapies (30 : 70%) to participate in a five-arm, 26-week, double-dummy, randomized study.\n\nABSTRACT.RESULTS:\nLiraglutide (1.2 or 1.8 mg) produced greater reductions in HbA1c from baseline, (−1.1%, baseline 8.5%) compared with placebo (+0.2%, P < 0.0001, baseline 8.4%) or rosiglitazone (−0.4%, P < 0.0001, baseline 8.4%) when added to glimepiride. Liraglutide 0.6 mg was less effective (−0.6%, baseline 8.4%). Fasting plasma glucose decreased by week 2, with a 1.6 mmol/l decrease from baseline at week 26 with liraglutide 1.2 mg (baseline 9.8 mmol/l) or 1.8 mg (baseline 9.7 mmol/l) compared with a 0.9 mmol/l increase (placebo, P < 0.0001, baseline 9.5 mmol/l) or 1.0 mmol/l decrease (rosiglitazone, P < 0.006, baseline 9.9 mmol/l). Decreases in postprandial plasma glucose from baseline were greater with liraglutide 1.2 or 1.8 mg [−2.5 to −2.7 mmol/l (baseline 12.9 mmol/l for both)] compared with placebo (−0.4 mmol/l, P < 0.0001, baseline 12.7 mmol/l) or rosiglitazone (−1.8 mmol/l, P < 0.05, baseline 13.0 mmol/l). Changes in body weight with liraglutide 1.8 mg (−0.2 kg, baseline 83.0 kg), 1.2 mg (+0.3 kg, baseline 80.0 kg) or placebo (−0.1 kg, baseline 81.9 kg) were less than with rosiglitazone (+2.1 kg, P < 0.0001, baseline 80.6 kg). Main adverse events for all treatments were minor hypoglycaemia (< 10%), nausea (< 11%), vomiting (< 5%) and diarrhoea (< 8%).\n\nABSTRACT.CONCLUSIONS:\nLiraglutide added to glimepiride was well tolerated and provided improved glycaemic control and favourable weight profile.\n\nBODY.INTRODUCTION:\nMost drugs that target Type 2 diabetes (T2D) also cause weight gain or hypoglycaemia, or both, with the risk increasing with combination therapy. Glucagon-like peptide-1 (GLP-1)-based therapies stimulate insulin secretion and reduce glucagon secretion only during hyperglycaemia. GLP-1 also slows gastric emptying and reduces appetite [1]. Although American Diabetes Association (ADA)/European Association for the Study of Diabetes (EASD) guidelines recommend lifestyle and metformin as initial therapy for T2D [2], sulphonylureas are used widely, particularly when metformin or thiazolidinediones are not tolerated. 
Glycaemic control eventually deteriorates with sulphonylureas while hypoglycaemia and weight gain are common [3]. Incretin therapy improves glycaemic control with low hypoglycaemic risk, while delayed gastric emptying and reduced appetite can reduce weight [1,4]. Liraglutide is a once-daily human GLP-1 analogue with 97% linear amino-acid sequence homology to human GLP-1 [5] and half-life of 13 h after subcutaneous administration that produces 24-h blood glucose control [6]. Liraglutide monotherapy for 14 weeks reduced glycated haemoglobin (HbA1c) by 1.7% and fasting plasma glucose (FPG) by 3.4 mmol/l without causing hypoglycaemia, along with weight loss (∼3 kg) compared with placebo [7]. Improvements in pancreatic B-cell function [7–9] and blood pressure [7], along with decreased glucagon secretion [7,10], also occurred. As part of the phase 3 programme [the Liraglutide Effect and Action in Diabetes (LEAD) programme] with liraglutide in > 4000 subjects with T2D as monotherapy or in combination therapy, this 26-week trial examined liraglutide plus glimepiride compared with either placebo or rosiglitazone added to glimepiride on glycaemic control and body weight.\n\nBODY.SUBJECTS AND METHODS.STUDY PARTICIPANTS:\nInclusion criteria: T2D treated with oral glucose-lowering agents (OGLAs) for ≥ 3 months; 18–80 years of age; HbA1c 7.0–11.0% (previous OGLA monotherapy) or 7.0–10.0% (previous OGLA combination therapy); body mass index (BMI) ≤ 45.0 kg/m2. Exclusion criteria: used insulin within 3 months, impaired liver or renal function, uncontrolled hypertension (≥ 180/100 mmHg), cancer or used any drugs apart from OGLAs likely to affect glucose concentrations. Subjects provided written informed consent. The study was conducted in accordance with good clinical practice guidelines and approved by independent ethics committees.\n\nBODY.SUBJECTS AND METHODS.STUDY DESIGN:\nThe study was a 26-week, double-blind, double-dummy, randomized, active-control, five-armed parallel (116 sites in 21 countries, primarily Europe and Asia) trial enrolling 1041 subjects (1–37 subjects per centre), all receiving glimepiride (2–4 mg/day) in combination with (Fig. 1): FIGURE 1Overview of trial design and treatment arms. one of three liraglutide doses [0.6, 1.2 or 1.8 mg, injected subcutaneously (Novo Nordisk, Bagsvaerd, Denmark) and rosiglitazone placebo]; liraglutide placebo and rosiglitazone placebo; liraglutide placebo and rosiglitazone 4 mg/day (rosiglitazone; AvandiaTM; GlaxoSmithKline, London, UK). The doses of rosiglitazone and glimepiride used were determined by the highest doses approved in all participating countries. After discontinuing previous OGLAs except glimepiride, separate 2-week titration and maintenance periods with glimepiride (open-label) preceded randomization (Fig. 1). Subjects were stratified according to previous treatment (monotherapy or combination therapy). After randomization, 2-week treatment titration and 24-week treatment (maintenance) phases (Fig. 1) were completed. Liraglutide was up-titrated weekly in 0.6-mg increments until allocated doses were reached. Glimepiride could be adjusted between 2 and 4 mg/day in case of hypoglycaemia or other adverse events (AEs), while other drug doses were fixed. Liraglutide (active and placebo) was supplied in 3-ml pre-filled pens with 31G needles (Novo Nordisk). Subjects were encouraged to inject liraglutide into the upper arm, thigh or abdomen at the same time each day. 
Rosiglitazone and glimepiride were taken in the morning or with the first meal.\n\nBODY.SUBJECTS AND METHODS.STUDY MEASUREMENTS.EFFICACY:\nThe primary endpoint was change from baseline HbA1c after 26 weeks of treatment. Secondary endpoints included: percentages of subjects reaching HbA1c (< 7.0%, ≤ 6.5%), FPG (5.0 to ≤ 7.2 mmol/l) and postprandial plasma glucose (PPG; 10.0 mmol/l) targets [11–13]; changes in body weight, FPG, mean PPG, indices of pancreatic B-cell function [pro-insulin : insulin ratio and homeostasis model assessment (HOMA)-B], HOMA-insulin resistance (HOMA-IR) and blood pressure (BP). HbA1c was measured centrally (MDS Pharma Services, King of Prussia, PA, USA) by high performance liquid chromatography while plasma glucose (PG) was self-measured using MediSense® glucose meters (Abbott Diagnostics Inc., Abbott Park, IL, USA). Insulin and C-peptide were measured by chemiluminescence, proinsulin by ELISA, while glucagon was measured in aprotinin-treated plasma by radioimmunoassay. The proinsulin : insulin ratio was calculated from fasting insulin and fasting proinsulin. HOMA-B and HOMA-IR were both calculated from FPG and fasting insulin. Samples measured centrally were collected and transported according to detailed procedures in the MDS Pharma Services manual. Samples stored at ambient temperature were shipped by courier to the central laboratory on the same day as collection, while frozen samples were shipped every 3 weeks.\n\nBODY.SUBJECTS AND METHODS.STUDY MEASUREMENTS.SAFETY:\nSafety variables included hypoglycaemic episodes based on PG levels (< 3.1 mmol/l), liraglutide antibodies including cross-reacting and neutralizing antibodies, tolerability (gastrointestinal complaints) and pulse. AEs, vital signs, electrocardiogram (ECG), biochemical and haematology measures including calcitonin were also monitored. Self-treated hypoglycaemic episodes were classified as minor, while those requiring third-party assistance were considered major. Serum antibodies against liraglutide were measured by radioimmunoprecipitation assay.\n\nBODY.SUBJECTS AND METHODS.STATISTICAL ANALYSES:\nAll efficacy and safety analyses were based on intent-to-treat criteria, defined as subjects who were exposed to ≥ 1 dose of trial product(s). Efficacy endpoints were analysed by ancova with treatment, country and previous glucose-lowering treatment as fixed effects and baseline values as covariates. Missing data were imputed by last observation carried forward (LOCF). Sample size calculations were based on predicted HbA1c and body weight after trial completion. As the three liraglutide + glimepiride groups were to be compared with both rosiglitazone + glimepiride and glimepiride monotherapy, two calculations were performed. These sample size calculations assumed a standard deviation of 1.2% of HbA1c, the non-inferiority/superiority margin vs. active control was set to 0.4% and the difference to detect (superiority vs. placebo) was set to 0.5%. For body weight, a coefficient of variation of 3% (based on phase 2a trials for liraglutide) and a difference to detect of 3% were assumed. A combined power (calculated as the product of the marginal powers for HbA1c and body weight) of at least 85% was required. These calculations indicated that at least 168 and 81 patients completing the study would be needed for the combination and glimepiride monotherapy groups, respectively. 
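As a rough illustration of the kind of HbA1c sample-size calculation outlined above, the sketch below uses `statsmodels` to size a two-group comparison with the stated standard deviation of 1.2% and a 0.5% difference to detect, with a placebo arm half the size of a combination arm. The alpha level, marginal power and allocation ratio shown are illustrative assumptions; the sketch covers only the HbA1c component and is not expected to reproduce the protocol's figures of 168 and 81 completers, which also folded in the body-weight endpoint and the combined-power requirement.

```python
# Rough sketch of an HbA1c-only sample-size calculation; alpha, power and the
# allocation ratio are illustrative assumptions, not values taken from the protocol.
from statsmodels.stats.power import NormalIndPower

sd = 1.2        # assumed standard deviation of HbA1c change (%)
delta = 0.5     # difference to detect vs. placebo (%)

analysis = NormalIndPower()
n_combination = analysis.solve_power(effect_size=delta / sd,
                                     alpha=0.05,
                                     power=0.90,
                                     ratio=0.5,          # n_placebo = 0.5 * n_combination
                                     alternative="two-sided")
n_placebo = 0.5 * n_combination
print(round(n_combination), round(n_placebo))  # roughly 182 and 91 with these inputs
```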
Assuming a drop-out rate of 25%, targets for randomization were 228 in each of the combination therapy groups and 114 in the placebo group (total n = 1026). To protect against Type 1 errors, HbA1c was analysed using hierarchical testing for descending doses of liraglutide. First, superiority of liraglutide 1.8 mg to placebo was tested and, only if superior to placebo, non-inferiority to rosiglitazone was tested. If non-inferiority was obtained, superiority to rosiglitazone for liraglutide 1.8 mg was tested and superiority to placebo for liraglutide 1.2 mg was tested. If superiority was confirmed, non-inferiority to rosiglitazone would be tested and so on (i.e. testing sequence was stopped when hypotheses could not be rejected). Superiority was concluded when upper limits of two-sided 95% confidence intervals (CIs) for treatment differences were below 0%; non-inferiority was concluded if these values were < 0.4%; for secondary endpoints, Type 1 errors were controlled by estimating simultaneous CIs using Dunnett's method. Proportions of subjects achieving HbA1c (HbA1c < 7.0%, and ≤ 6.5%) and FPG (5.0 ≤ FPG ≤ 7.2 mmol/l) targets [13] were compared between treatments using logistic regression with allocated treatment and baseline values as covariates. Chi-square analyses assessed differences in treatments for percentages of subjects achieving no, one, two or three PPG values < 10 mmol/l [13]. Hypoglycaemic episodes were analysed under the assumption that number per subject were negatively binomially distributed using a generalized linear model, including treatment and country as fixed effects. Other safety data were compared by descriptive statistics. Values for descriptive statistics are expressed as means ± sd, while ancova results are expressed as least square means ± SEM or with 95% CI unless otherwise noted. Significance levels were set to 5% for two-sided tests and 2.5% for one-sided tests.\n\nBODY.RESULTS.DISPOSITION AND DEMOGRAPHICS:\nThe treatment groups were well balanced (Table 1). Of 1712 subjects screened, 1041 were randomized and 1040 were exposed to trial drugs; 147 subjects (14.1%) withdrew (Fig. 2). Withdrawals were higher with placebo (27%) and rosiglitazone treatment (16%) compared with liraglutide 0.6 mg (11%), liraglutide 1.2 mg (14%) and liraglutide 1.8 mg (9%) treatment. Thirty-eight subjects (3.7%) withdrew as a result of AEs (Fig. 2). Table 1 Demographic characteristics of study participants Liraglutide 0.6 mg ( n = 233) Liraglutide 1.2 mg ( n = 228) Liraglutide 1.8 mg ( n = 234) Placebo ( n = 114) Rosiglitazone ( n = 232) Male : female (%) 54 : 46 45 : 55 53 : 47 47 : 53 47 : 53 Age (years) 55.7 ± 9.9 57.7 ± 9.0 55.6 ± 10.0 54.7 ± 10.0 56.0 ± 9.8 Duration of diabetes (years) 6.5 (4.0,10.2) 6.7 (4.0,10.7) 6.5 (3.7,10.5) 6.5 (4.5,10.6) 6.6 (4.3,10.7) Previous on mono : combi (%) 30 : 70 31 : 69 27 : 73 32 : 68 32 : 68 FPG (mmol/l) 10.0 ± 2.4 9.8 ± 2.7 9.7 ± 2.4 9.5 ± 2.0 9.9 ± 2.5 HbA 1c (%) 8.4 ± 1.0 8.5 ± 1.1 8.5 ± 0.9 8.4 ± 1.0 8.4 ± 1.0 Diabetic retinopathy (%) 17.2 14.9 12.0 13.2 16.4 Hypertension (%) 69.1 68.0 69.7 64.9 66.8 BMI (kg/m 2 ) 30.0 ± 5.0 29.8 ± 5.1 30.0 ± 5.1 30.3 ± 5.4 29.4 ± 4.8 Weight (kg) 82.6 ± 17.7 80.0 ± 17.1 83.0 ± 18.1 81.9 ± 17.1 80.6 ± 17.0 Systolic blood pressure (mmHg) 131 ± 16 133 ± 15 132 ± 16 131 ± 15.3 133 ± 15 Data are mean ± sd and percentages, except for duration of diabetes, where data are median, 25th and 75th percentile. 
BMI, body mass index; FPG, fasting plasma glucose; HbA 1c , glycated haemoglobin; mono : combi, previous treatment with either monotherapy or combination therapy; sd , standard deviation. FIGURE 2Flow of patients through the study.\n\nBODY.RESULTS.EFFICACY.HBA:\nHbA1c decreased rapidly with all doses of liraglutide when added to glimepiride compared with either rosiglitazone or placebo (i.e. glimepiride monotherapy), irrespective of previous therapy. The greatest decreases occurred with liraglutide 1.2 and 1.8 mg (Fig. 3a–c). After 26 weeks, HbA1c decreased by 1.1% from baseline (primary endpoint) with either liraglutide 1.2 or 1.8 mg, respectively, compared with either placebo (+0.2%) or rosiglitazone (−0.4%) (Fig. 3d). Estimated treatment differences and 95% CIs to placebo were: liraglutide 1.8 mg: −1.4% (−1.6, −1.1); liraglutide 1.2 mg: −1.3% (−1.5, −1.1); liraglutide 0.6 mg: −0.8% (−1.1, −0.6); rosiglitazone: −0.7% (−0.9, −0.4). All liraglutide doses were superior to placebo (P < 0.0001), while the two higher liraglutide doses were superior to rosiglitazone (P < 0.0001). Liraglutide 0.6 mg was non-inferior to rosiglitazone. Rosiglitazone also was superior to placebo (P < 0.0001). FIGURE 3Mean glycated haemoglobin (HbA1c) by treatment and week (intent-to-treat population with last observation carried forward): (a) overall population; (b) previously on monotherapy; or (c) previously on combination therapy; (d) mean changes in HbA1c from baseline after 26 weeks of treatment. Keys: (a–c) liraglutide 0.6 mg: grey dotted line with squares; liraglutide 1.2 mg: black solid line with triangles; liraglutide 1.8 mg: black dotted line with squares; rosiglitazone: grey solid line with circles; placebo: black solid line with circles. (d) liraglutide 0.6 mg: black stripes on white; liraglutide 1.2 mg: white stripes on black, liraglutide 1.8 mg: grey tint; rosiglitazone: white; placebo: black. ****P < 0.0001 compared with placebo; ††††P < 0.0001 compared with rosiglitazone. HbA1c decreases were greater for subjects who entered from monotherapy compared with combination therapy (Fig. 3d). However, because the increase with placebo was higher for individuals entering on combination therapy (0.7 vs. 0.23%), the differences between treatment groups in favour of liraglutide were similar irrespective of whether subjects were treated previously with monotherapy or combination therapy. Neither age, gender nor BMI affected these trends.\n\nBODY.RESULTS.EFFICACY.PERCENTAGE REACHING AN HBA:\nThe percentage of subjects reaching ADA [2] and International Diabetes Federation (IDF)/American Association of Clinical Endocrinologists (AACE) [11,12] treatment HbA1c goals with liraglutide was dose dependent (Fig. 4). At week 26, 42% and 21% of subjects treated with liraglutide 1.8 mg reached an HbA1c < 7.0% and ≤ 6.5%, respectively, compared with 8% and 4% for placebo (Fig. 4). The estimated proportion of subjects treated with either liraglutide 1.2 or 1.8 mg reaching ADA/EASD and IDF/AACE HbA1c targets was substantially greater compared with either placebo (P < 0.0001) or rosiglitazone (Fig. 4; P ≤ 0.0003), with more patients reaching < 7.0% with liraglutide 1.8 mg compared with 1.2 mg (P = 0.018). 
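The CI-based decision rules from the statistical analyses section (superiority when the upper limit of the two-sided 95% CI for the treatment difference lies below 0%; non-inferiority when it lies below the 0.4% margin) can be written as a small helper. This is an illustrative paraphrase of the stated rules rather than code from the trial, and it does not capture the hierarchical testing order; the example value is the reported liraglutide 0.6 mg vs. placebo difference of −0.8% with 95% CI (−1.1, −0.6).

```python
# Illustrative paraphrase of the reported decision rules for an HbA1c treatment
# difference (active minus comparator): superiority if the CI upper limit is below 0%,
# non-inferiority if it is below the prespecified 0.4% margin.
def classify_difference(ci_upper, margin=0.4):
    if ci_upper < 0.0:
        return "superior"
    if ci_upper < margin:
        return "non-inferior"
    return "not demonstrated"

# Reported liraglutide 0.6 mg vs. placebo difference: -0.8% (95% CI -1.1 to -0.6).
print(classify_difference(-0.6))   # -> "superior"
```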
FIGURE 4Subjects achieving specified glycated haemoglobin (HbA1c) levels: (a) percentage reaching HbA1c < 7.0% (American Diabetes Association/European Association for the Study of Diabetes target); (b) percentage reaching HbA1c < 6.5% (International Diabetes Federation/American Association of Clinical Endocrinologists targets); (c) cumulative distribution of HbA1c at 26 weeks for the intent-to-treat (ITT) population; and (d) for the ITT last observation carried forward (LOCF) population. Keys: (a, b) liraglutide 0.6 mg: black stripes on white; liraglutide 1.2 mg: white stripes on black, liraglutide 1.8 mg: grey tint; rosiglitazone: white; placebo: black. (c, d) liraglutide 0.6 mg: pale grey solid line; liraglutide 1.2 mg: grey solid line, liraglutide 1.8 mg: black solid line; rosiglitazone: dotted black line; placebo: dotted grey line; baseline visit: long dashed black line. ****P < 0.0001 or **P < 0.01 compared with placebo; ††††P < 0.0001 or †††P = 0.0005 compared with rosiglitazone.\n\nBODY.RESULTS.EFFICACY.FASTING PLASMA GLUCOSE:\nBy week 2, subjects treated with liraglutide had rapid and larger decreases in FPG vs. comparator treatment. At week 26, all doses of liraglutide decreased FPG more than did placebo (Fig. 5; P < 0.0001), while only liraglutide 1.2 or 1.8 mg produced greater reductions than rosiglitazone. FPG treatment differences to placebo were 1.7 mmol/l for liraglutide 0.6 mg and 2.6 mmol/l for both liraglutide 1.2 and 1.8 mg. An 0.7-mmol/l greater reduction in FPG was achieved with either liraglutide 1.2 or 1.8 mg compared with rosiglitazone (P ≤ 0.006) after 26 weeks. FIGURE 5Mean changes from baseline in fasting plasma glucose after 26 weeks of treatment. ****P < 0.0001 compared with placebo; ††P < 0.01 compared with rosiglitazone. The percentage of subjects achieving FPG values between 5.0 mmol/l and ≤ 7.2 mmol/l (ADA target) after 26 weeks was higher with liraglutide: 0.6 mg (19%; P = 0.002); 1.2 mg (37%; P < 0.001); and 1.8 mg (38%;P < 0.001) compared with placebo (7%). The liraglutide 1.2 and 1.8 mg treatment groups also had more subjects achieving the same FPG target at end of treatment compared with rosiglitazone (26%) (P = 0.007 and P = 0.01, respectively).\n\nBODY.RESULTS.EFFICACY.POSTPRANDIAL PLASMA GLUCOSE:\nPPG was reduced similarly after each meal. The greatest reductions in mean PPG values from baseline (average of values obtained 90 min after breakfast, lunch and evening meal) occurred with liraglutide 1.2 mg (2.5 mmol/l) and liraglutide 1.8 mg (2.7 mmol/l). By comparison, the reduction from baseline in mean PPG values was 1.8 mmol/l for rosiglitazone and liraglutide 0.6 mg and 0.4 mmol/l for placebo. Treatment differences for PPG were greater with all doses of liraglutide compared with placebo (1.5–2.4 mmol/l; P < 0.0001) and greater with liraglutide 1.2 mg (0.64 mmol/l; P = 0.043) and 1.8 mg (0.87 mmol/l;P = 0.0022) compared with rosiglitazone.\n\nBODY.RESULTS.EFFICACY.PPG MEASUREMENTS < 10.0 MMOL/L:\nThe percentage of subjects with one, two or three PPG measurements < 10.0 mmol/l (ADA target) were greater for all doses of liraglutide compared with placebo (P < 0.05) but not rosiglitazone.\n\nBODY.RESULTS.BODY WEIGHT:\nMean weight at baseline was 81.6 kg. Mean reductions in weight from baseline to end of treatment were 0.2 kg with liraglutide 1.8 mg and 0.1 kg with placebo treatment, while increases occurred with either liraglutide 0.6 mg (0.7 kg), liraglutide 1.2 mg (0.3 kg) or rosiglitazone (2.1 kg) (Fig. 6). 
Unlike rosiglitazone, weight did not increase substantially with liraglutide and the differences between rosiglitazone and liraglutide were statistically significant (−2.3 to −1.4 kg; P < 0.0001), although there were no significant differences compared with placebo. Gender appeared to have no influence on the results, as indicated when added as a fixed effect in the ancova model. FIGURE 6Mean changes in body weight from baseline after 26 weeks of treatment. *P < 0.05 compared with placebo; ††††P < 0.0001 compared with rosiglitazone.\n\nBODY.RESULTS.INDICES OF PANCREATIC B-CELL FUNCTION AND INSULIN RESISTANCE:\nReductions in the proinsulin : insulin ratio were greater with both liraglutide 1.2 and 1.8 mg compared with either rosiglitazone or placebo (Table 2; P ≤ 0.02). HOMA-B increased with liraglutide (1.8 or 1.2 mg) compared with rosiglitazone (P < 0.05), while this increase was only different to placebo with liraglutide 1.2 mg (P = 0.01) and not liraglutide 1.8 mg (P = 0.051). There were no significant differences between treatments for HOMA-IR. Table 2 Selected indices of pancreatic B-cell function Variable Treatment Baseline Week 26 (LOCF) Least square difference from placebo (95% CI) Least square difference from rosiglitazone (95% CI) Proinsulin : insulin ratio Liraglutide 0.6 mg 0.42 ± 0.22 0.38 ± 0.24 −0.05 (−0.11; 0.00) −0.02 (−0.06; 0.03) Liraglutide 1.2 mg 0.45 ± 0.31 0.33 ± 0.20 −0.10 (−0.16; −0.05) † −0.07 (−0.11; −0.02) * Liraglutide 1.8 mg 0.48 ± 0.33 0.36 ± 0.20 −0.09 (−0.15; −0.03) * −0.05 (−0.10; −0.01) * Placebo 0.44 ± 0.27 0.46 ± 0.29 Rosiglitazone 0.45 ± 0.29 0.40 ± 0.20 HOMA-B (%) Liraglutide 0.6 mg 51 ± 43.3 70 ± 88.6 15 (−19.10; 49.0) 11 (−16.7; 39.0) Liraglutide 1.2 mg 71 ± 254.3 99 ± 184.3 43 (8.10; 76.9) * 39 (10.3; 67.0) * Liraglutide 1.8 mg 56 ± 84.6 91 ± 108.2 34 (−0.23; 68.5) 30 (2.00; 58.6) * Placebo 56 ± 103.3 52 ± 107.3 Rosiglitazone 46 ± 36.2 59 ± 63.3 * P ≤ 0.05; † P < 0.0001. CI, confidence interval; HOMA, homeostatis model assessment; LOCF, last observation carried forward. \n\nBODY.RESULTS.BLOOD PRESSURE AND PULSE:\nAlthough decreases in systolic blood pressure occurred with either liraglutide 1.2 or 1.8 mg (2.6–2.8 mmHg), they were not significantly different from placebo or rosiglitazone (0.9–2.3 mmHg). Reductions in diastolic blood pressure also occurred with all treatments (0.7–1.4 mmHg), with no significant differences between treatments. Pulse increases above baseline ranged from 2 to 4 beats/min with the three doses of liraglutide and 1 beat/min with rosiglitazone, while pulse decreased by 1 beat/min with placebo. Changes in pulse for all doses of liraglutide were significant vs. placebo (P ≤ 0.002). This also was true with either liraglutide 1.8 or 1.2 mg compared with rosiglitazone (P < 0.01).\n\nBODY.RESULTS.SAFETY:\nThe most common treatment-emergent AEs that were considered by investigators to be either possibly or probably related to liraglutide were gastrointestinal (diarrhoea, nausea, dyspepsia and constipation) and nervous system disorders (headache and dizziness), particularly during the first 4 weeks. Nausea was highest with liraglutide 1.2 mg (10.5%) and lowest with placebo (1.8%). Vomiting (4.4%) and diarrhoea (7.9%) were also higher with liraglutide 1.2 mg. Withdrawals because of nausea ranged from 0.9–2.2%, vomiting 0.4–0.9% and diarrhoea 0–1.3%. Nausea was more common with liraglutide compared with placebo and rosiglitazone, particularly during the first 4 weeks (Fig. 7). 
Frequency of nausea was less in the liraglutide 0.6 mg treatment group compared with the higher doses of liraglutide. Generally, the occurrence of nausea dissipated from 4 to 26 weeks of treatment in all groups using liraglutide (Fig. 7). FIGURE 7Percentage of subjects experiencing nausea over the course of the study. Key: liraglutide 0.6 mg with glimepiride: black line with filled circles; liraglutide 1.2 mg with glimepiride: black line with filled triangles; liraglutide 1.8 mg with glimepiride: grey line with hollow circles; glimepiride grey lines with filled squares; rosiglitazone and glimepiride: grey line with hollow triangles. The incidence of serious AEs ranged between 3 and 5%: placebo (3%), rosiglitazone (3%), liraglutide 0.6 mg (3%), liraglutide 1.2 mg (4%) and liraglutide 1.8 mg (5%). Most treatment-emergent serious AEs were judged by investigators to be unlikely to be related to trial products. No deaths were reported during the trial. One subject developed chronic pancreatitis whilst taking liraglutide 0.6 mg; the person had no reported previous history of pancreatitis. The subject continued on liraglutide therapy and completed the trial. At screening, five patients had been previously diagnosed with pancreatitis. As pancreatitis was not an exclusion criterion, these patients were randomized as follows: one to liraglutide 0.6 mg, one to liraglutide 1.2 mg, two to liraglutide 1.8 mg and one to rosiglitazone + glimepiride. All five patients completed the trial without reporting pancreatitis as an adverse event. Hypoglycaemia was infrequent with all treatments. One major hypoglycaemic episode (self-measured blood glucose = 3.0 mmol/l) occurred 9 days after treatment started in a subject receiving liraglutide 1.8 mg in combination with glimepiride. Although medical assistance was not needed, the subject required third-party assistance. The investigator judged the episode as likely to be related to glimepiride and reduced the dose from 4 to 3 mg after the incident. Minor hypoglycaemia occurred in < 10% of subjects for any treatment. The proportion of subjects experiencing minor hypoglycaemia during the trial was lowest with placebo (i.e. glimepiride monotherapy 2.6%; 0.17 events/subject-year), comparable with liraglutide 0.6 mg (5.2%, 0.17 events/subject-year) and rosiglitazone (4.3%, 0.12 events/subject-year) groups and similar between the liraglutide 1.2 mg (9.2%, 0.51 events/subject-year) and liraglutide 1.8 mg (8.1%, 0.47 events/subject-year) treatment groups. Incidence was higher with liraglutide 1.2 mg (P = 0.0024) and 1.8 mg (P = 0.0065) compared with rosiglitazone and liraglutide 1.2 mg compared with placebo (P = 0.048), occurring in the setting of lower mean HbA1c values. Antibodies to liraglutide were found in 9–13% of subjects treated with liraglutide. No significant effects of these antibodies on HbA1c were found in pooled analyses of four trials including the current study. There were no clinically relevant changes in ophthalmoscopy, biochemistry, urinalysis, haematology or ECG assessments. No significant differences in calcitonin were found between the three groups treated with liraglutide when compared with either placebo or rosiglitazone at the end of the trial at week 26.\n\nBODY.DISCUSSION:\nTreatment with liraglutide plus glimepiride was superior to glimepiride monotherapy at all doses of liraglutide and superior to rosiglitazone plus glimepiride for the two higher liraglutide doses for improving HbA1c. 
Similar findings for reductions in FPG and PPG highlight improved 24-h glucose control with once-daily liraglutide, with substantially more subjects reaching glycaemic targets, particularly with liraglutide 1.8 mg. Improvements in pancreatic B-cell function were larger with liraglutide 1.2 and 1.8 mg compared with rosiglitazone. Liraglutide was well tolerated and occurrence of gastrointestinal AEs was low overall, particularly after week 4. Although rates of hypoglycaemia were low in all treatment groups (< 10%), minor hypoglycaemic events occurred more often in patients treated with glimepiride plus liraglutide 1.2 or 1.8 mg than with glimepiride alone. It should be noted, however, that patients treated with liraglutide 1.2 or 1.8 mg achieved a lower HbA1c than those receiving glimepiride monotherapy. At lower HbA1c levels, sulphonylureas are known to elicit hypoglycaemia more readily than at higher levels. In clinical practice it may be possible to reduce the dose of sulphonylurea (when used with liraglutide) to minimize risk of hypoglycaemia and maintain HbA1cimprovements. Although weight effects were modest, liraglutide produced more favourable weight effects compared with rosiglitazone, which produced substantial weight gain. In other studies with liraglutide, subjects adding a 1.8-mg dose to metformin lost 2.8 kg [14], while those adding both metformin and glimepiride lost 1.8 kg compared with placebo [15] (both over 26 weeks) and those on liraglutide monotherapy (1.8 mg) lost 2.45 kg over 52 weeks [16]. In our study, because sulphonylureas usually cause weight gain, inclusion or optimization of glimepiride but not metformin may have mitigated the weight benefits typically associated with liraglutide. Lack of weight effects could be secondary to lower baseline body weight, withdrawal of previous metformin treatment or defensive snacking to minimize risk of hypoglycaemia. It might have been expected that the greater weight gain with rosiglitazone compared with liraglutide 1.8 mg would be associated with a concurrent increase in insulin resistance with rosiglitazone. The absence of this effect could reflect the insulin-sensitizing nature of rosiglitazone. Improvements in pancreatic B-cell function associated with liraglutide are consistent with other studies [7–9]. Study strengths include inclusion of both placebo and active (rosiglitazone) comparators and that OGLAs were optimized (not maximized) before randomization to minimize risk of hypoglycaemia. Limitations of the study include short duration of the trial and restriction on glimepiride and rosiglitazone in some countries that precluded maximal dosing. The impact of using other GLP-1-based treatments [such as exenatide, or the dipeptidyl peptidase-4 (DPP-4) inhibitor, sitagliptin] with sulphonylureas in subjects with T2D has been studied. In a 30-week American trial where exenatide twice a day was added to sulphonylureas, HbA1c was reduced by 0.46% from baseline with 5 μg and 0.86% with 10 μg [17] compared with 1.1% with liraglutide 1.8 or 1.2 mg. This reduction in HbA1c with liraglutide is consistent with other LEAD trials investigating liraglutide as monotherapy or in combination with various OGLA drugs. In these trials, HbA1c was reduced by 1–1.5%[14,16,18–20]. Reductions in FPG with exenatide were 0.3 and 0.6 mmol/l from baseline with 5 μg and 10 μg, respectively, compared with 1.4 mmol/l with liraglutide 1.8 mg; weight loss of 1.6 kg occurred with exenatide 10 μg compared with 0.2 kg for liraglutide 1.8 mg [17]. 
Differences in weight effects may be as a result of lower baseline weight in this trial (82 kg) compared with exenatide (96 kg) and discontinuation of previous metformin therapy, unlike the exenatide trial where exenatide was added to previous sulphonylurea monotherapy [17]. Other large-scale trials with liraglutide in combination with sulphonylureas have demonstrated weight loss of 2–3 kg [18,20]. Withdrawals from exenatide trials ranged from 24–30% compared with 9–14% with liraglutide in this study. Nausea with exenatide ranged from 39% with 5 μg to 51% with 10 μg [17] compared with 10.5% for liraglutide. Furthermore, 41% were positive for anti-exenatide antibodies compared with 9–13% with anti-liraglutide antibodies. With sitagliptin 100 mg once daily for 24 weeks, HbA1c decreased by 0.3% from baseline in subjects receiving glimepiride, with 11% achieving an HbA1c < 7.0%[21]. Reductions in FPG and PPG from baseline were 0.05 and 1.4 mmol/l, respectively, while weight increased by 0.8 kg and the prevalence of nausea was < 1%. Although head-to-head trials are required to test true differences between these agents, the marked effects of liraglutide on FPG may be as a result of consistent blood levels of liraglutide maintained over 24 h compared with exenatide which has to be administered 60 min before breakfast and dinner and has a half-life of 1.5–3.6 h [22]. In a recent 26-week head-to-head trial comparing liraglutide with exenatide, liraglutide produced a 0.3% greater decrease on HbA1c (P < 0.0001) [20]. Because DPP-4 inhibitors inhibit the degradation of GLP-1, the efficacy of sitagliptin is dependent on levels of endogenous GLP-1 which is physiologically low compared with the much higher pharmacological levels of liraglutide. Pharmacological levels may be needed to induce satiety, weight loss and possibly larger HbA1c reductions. Liraglutide is an effective and well-tolerated once-daily human GLP-1 analogue that improves overall glycaemic control and indices of pancreatic B-cell function with minimal weight gain and risk of hypoglycaemia when used in combination with a sulphonylurea for T2D.\n\nBODY.COMPETING INTERESTS:\nThe study was funded by Novo Nordisk, the manufacturer of liraglutide. In collaboration with the investigators, Novo Nordisk was responsible for the study design, protocol, statistical analysis plans, oversight, analysis and reporting of the results. Data were recorded at the clinical centres and maintained by the sponsor. The LEAD-1 SU study group had full access to the data. Final responsibility for the decision to submit the manuscript for publication was the authors. MM has received lecture fees from Novo Nordisk, Servier, MSD; JS has received honoraria, grants and lecture fees from Novo Nordisk; MB, WMWB and NAK have no conflicts to declare; JS has received lecture fees from Novo Nordisk; MZ is employed by, and holds stock in, Novo Nordisk; TLT is employed by Novo Nordisk; SC is a member of the international advisory board on liraglutide for Novo Nordisk and has received lecture fees from Novo Nordisk.\n\n**Question:** Compared to Placebo plus glimepiride what was the result of Liraglutide (1.2 mg) plus glimepiride on Patients reaching HbA1c goals less than 7.0% and equal or less than 6.5%?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly increased\n"
}
] |
significantly increase
|
100
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Liraglutide, a once-daily human GLP-1 analogue, added to a sulphonylurea over 26 weeks produces greater improvements in glycaemic and weight control compared with adding rosiglitazone or placebo in subjects with Type 2 diabetes (LEAD-1 SU)\n\n ABSTRACT.AIM:\nTo compare the effects of combining liraglutide (0.6, 1.2 or 1.8 mg/day) or rosiglitazone 4 mg/day (all n ≥ 228) or placebo (n = 114) with glimepiride (2–4 mg/day) on glycaemic control, body weight and safety in Type 2 diabetes.\n\nABSTRACT.METHODS:\nIn total, 1041 adults (mean ± sd), age 56 ± 10 years, weight 82 ± 17 kg and glycated haemoglobin (HbA1c) 8.4 ± 1.0% at 116 sites in 21 countries were stratified based on previous oral glucose-lowering mono : combination therapies (30 : 70%) to participate in a five-arm, 26-week, double-dummy, randomized study.\n\nABSTRACT.RESULTS:\nLiraglutide (1.2 or 1.8 mg) produced greater reductions in HbA1c from baseline, (−1.1%, baseline 8.5%) compared with placebo (+0.2%, P < 0.0001, baseline 8.4%) or rosiglitazone (−0.4%, P < 0.0001, baseline 8.4%) when added to glimepiride. Liraglutide 0.6 mg was less effective (−0.6%, baseline 8.4%). Fasting plasma glucose decreased by week 2, with a 1.6 mmol/l decrease from baseline at week 26 with liraglutide 1.2 mg (baseline 9.8 mmol/l) or 1.8 mg (baseline 9.7 mmol/l) compared with a 0.9 mmol/l increase (placebo, P < 0.0001, baseline 9.5 mmol/l) or 1.0 mmol/l decrease (rosiglitazone, P < 0.006, baseline 9.9 mmol/l). Decreases in postprandial plasma glucose from baseline were greater with liraglutide 1.2 or 1.8 mg [−2.5 to −2.7 mmol/l (baseline 12.9 mmol/l for both)] compared with placebo (−0.4 mmol/l, P < 0.0001, baseline 12.7 mmol/l) or rosiglitazone (−1.8 mmol/l, P < 0.05, baseline 13.0 mmol/l). Changes in body weight with liraglutide 1.8 mg (−0.2 kg, baseline 83.0 kg), 1.2 mg (+0.3 kg, baseline 80.0 kg) or placebo (−0.1 kg, baseline 81.9 kg) were less than with rosiglitazone (+2.1 kg, P < 0.0001, baseline 80.6 kg). Main adverse events for all treatments were minor hypoglycaemia (< 10%), nausea (< 11%), vomiting (< 5%) and diarrhoea (< 8%).\n\nABSTRACT.CONCLUSIONS:\nLiraglutide added to glimepiride was well tolerated and provided improved glycaemic control and favourable weight profile.\n\nBODY.INTRODUCTION:\nMost drugs that target Type 2 diabetes (T2D) also cause weight gain or hypoglycaemia, or both, with the risk increasing with combination therapy. Glucagon-like peptide-1 (GLP-1)-based therapies stimulate insulin secretion and reduce glucagon secretion only during hyperglycaemia. GLP-1 also slows gastric emptying and reduces appetite [1]. Although American Diabetes Association (ADA)/European Association for the Study of Diabetes (EASD) guidelines recommend lifestyle and metformin as initial therapy for T2D [2], sulphonylureas are used widely, particularly when metformin or thiazolidinediones are not tolerated. 
Glycaemic control eventually deteriorates with sulphonylureas while hypoglycaemia and weight gain are common [3]. Incretin therapy improves glycaemic control with low hypoglycaemic risk, while delayed gastric emptying and reduced appetite can reduce weight [1,4]. Liraglutide is a once-daily human GLP-1 analogue with 97% linear amino-acid sequence homology to human GLP-1 [5] and half-life of 13 h after subcutaneous administration that produces 24-h blood glucose control [6]. Liraglutide monotherapy for 14 weeks reduced glycated haemoglobin (HbA1c) by 1.7% and fasting plasma glucose (FPG) by 3.4 mmol/l without causing hypoglycaemia, along with weight loss (∼3 kg) compared with placebo [7]. Improvements in pancreatic B-cell function [7–9] and blood pressure [7], along with decreased glucagon secretion [7,10], also occurred. As part of the phase 3 programme [the Liraglutide Effect and Action in Diabetes (LEAD) programme] with liraglutide in > 4000 subjects with T2D as monotherapy or in combination therapy, this 26-week trial examined liraglutide plus glimepiride compared with either placebo or rosiglitazone added to glimepiride on glycaemic control and body weight.\n\nBODY.SUBJECTS AND METHODS.STUDY PARTICIPANTS:\nInclusion criteria: T2D treated with oral glucose-lowering agents (OGLAs) for ≥ 3 months; 18–80 years of age; HbA1c 7.0–11.0% (previous OGLA monotherapy) or 7.0–10.0% (previous OGLA combination therapy); body mass index (BMI) ≤ 45.0 kg/m2. Exclusion criteria: used insulin within 3 months, impaired liver or renal function, uncontrolled hypertension (≥ 180/100 mmHg), cancer or used any drugs apart from OGLAs likely to affect glucose concentrations. Subjects provided written informed consent. The study was conducted in accordance with good clinical practice guidelines and approved by independent ethics committees.\n\nBODY.SUBJECTS AND METHODS.STUDY DESIGN:\nThe study was a 26-week, double-blind, double-dummy, randomized, active-control, five-armed parallel (116 sites in 21 countries, primarily Europe and Asia) trial enrolling 1041 subjects (1–37 subjects per centre), all receiving glimepiride (2–4 mg/day) in combination with (Fig. 1): FIGURE 1Overview of trial design and treatment arms. one of three liraglutide doses [0.6, 1.2 or 1.8 mg, injected subcutaneously (Novo Nordisk, Bagsvaerd, Denmark) and rosiglitazone placebo]; liraglutide placebo and rosiglitazone placebo; liraglutide placebo and rosiglitazone 4 mg/day (rosiglitazone; AvandiaTM; GlaxoSmithKline, London, UK). The doses of rosiglitazone and glimepiride used were determined by the highest doses approved in all participating countries. After discontinuing previous OGLAs except glimepiride, separate 2-week titration and maintenance periods with glimepiride (open-label) preceded randomization (Fig. 1). Subjects were stratified according to previous treatment (monotherapy or combination therapy). After randomization, 2-week treatment titration and 24-week treatment (maintenance) phases (Fig. 1) were completed. Liraglutide was up-titrated weekly in 0.6-mg increments until allocated doses were reached. Glimepiride could be adjusted between 2 and 4 mg/day in case of hypoglycaemia or other adverse events (AEs), while other drug doses were fixed. Liraglutide (active and placebo) was supplied in 3-ml pre-filled pens with 31G needles (Novo Nordisk). Subjects were encouraged to inject liraglutide into the upper arm, thigh or abdomen at the same time each day. 
Rosiglitazone and glimepiride were taken in the morning or with the first meal.\n\nBODY.SUBJECTS AND METHODS.STUDY MEASUREMENTS.EFFICACY:\nThe primary endpoint was change from baseline HbA1c after 26 weeks of treatment. Secondary endpoints included: percentages of subjects reaching HbA1c (< 7.0%, ≤ 6.5%), FPG (5.0 to ≤ 7.2 mmol/l) and postprandial plasma glucose (PPG; 10.0 mmol/l) targets [11–13]; changes in body weight, FPG, mean PPG, indices of pancreatic B-cell function [pro-insulin : insulin ratio and homeostasis model assessment (HOMA)-B], HOMA-insulin resistance (HOMA-IR) and blood pressure (BP). HbA1c was measured centrally (MDS Pharma Services, King of Prussia, PA, USA) by high performance liquid chromatography while plasma glucose (PG) was self-measured using MediSense® glucose meters (Abbott Diagnostics Inc., Abbott Park, IL, USA). Insulin and C-peptide were measured by chemiluminescence, proinsulin by ELISA, while glucagon was measured in aprotinin-treated plasma by radioimmunoassay. The proinsulin : insulin ratio was calculated from fasting insulin and fasting proinsulin. HOMA-B and HOMA-IR were both calculated from FPG and fasting insulin. Samples measured centrally were collected and transported according to detailed procedures in the MDS Pharma Services manual. Samples stored at ambient temperature were shipped by courier to the central laboratory on the same day as collection, while frozen samples were shipped every 3 weeks.\n\nBODY.SUBJECTS AND METHODS.STUDY MEASUREMENTS.SAFETY:\nSafety variables included hypoglycaemic episodes based on PG levels (< 3.1 mmol/l), liraglutide antibodies including cross-reacting and neutralizing antibodies, tolerability (gastrointestinal complaints) and pulse. AEs, vital signs, electrocardiogram (ECG), biochemical and haematology measures including calcitonin were also monitored. Self-treated hypoglycaemic episodes were classified as minor, while those requiring third-party assistance were considered major. Serum antibodies against liraglutide were measured by radioimmunoprecipitation assay.\n\nBODY.SUBJECTS AND METHODS.STATISTICAL ANALYSES:\nAll efficacy and safety analyses were based on intent-to-treat criteria, defined as subjects who were exposed to ≥ 1 dose of trial product(s). Efficacy endpoints were analysed by ancova with treatment, country and previous glucose-lowering treatment as fixed effects and baseline values as covariates. Missing data were imputed by last observation carried forward (LOCF). Sample size calculations were based on predicted HbA1c and body weight after trial completion. As the three liraglutide + glimepiride groups were to be compared with both rosiglitazone + glimepiride and glimepiride monotherapy, two calculations were performed. These sample size calculations assumed a standard deviation of 1.2% of HbA1c, the non-inferiority/superiority margin vs. active control was set to 0.4% and the difference to detect (superiority vs. placebo) was set to 0.5%. For body weight, a coefficient of variation of 3% (based on phase 2a trials for liraglutide) and a difference to detect of 3% were assumed. A combined power (calculated as the product of the marginal powers for HbA1c and body weight) of at least 85% was required. These calculations indicated that at least 168 and 81 patients completing the study would be needed for the combination and glimepiride monotherapy groups, respectively. 
Assuming a drop-out rate of 25%, targets for randomization were 228 in each of the combination therapy groups and 114 in the placebo group (total n = 1026). To protect against Type 1 errors, HbA1c was analysed using hierarchical testing for descending doses of liraglutide. First, superiority of liraglutide 1.8 mg to placebo was tested and, only if superior to placebo, non-inferiority to rosiglitazone was tested. If non-inferiority was obtained, superiority to rosiglitazone for liraglutide 1.8 mg was tested and superiority to placebo for liraglutide 1.2 mg was tested. If superiority was confirmed, non-inferiority to rosiglitazone would be tested and so on (i.e. testing sequence was stopped when hypotheses could not be rejected). Superiority was concluded when upper limits of two-sided 95% confidence intervals (CIs) for treatment differences were below 0%; non-inferiority was concluded if these values were < 0.4%; for secondary endpoints, Type 1 errors were controlled by estimating simultaneous CIs using Dunnett's method. Proportions of subjects achieving HbA1c (HbA1c < 7.0%, and ≤ 6.5%) and FPG (5.0 ≤ FPG ≤ 7.2 mmol/l) targets [13] were compared between treatments using logistic regression with allocated treatment and baseline values as covariates. Chi-square analyses assessed differences in treatments for percentages of subjects achieving no, one, two or three PPG values < 10 mmol/l [13]. Hypoglycaemic episodes were analysed under the assumption that number per subject were negatively binomially distributed using a generalized linear model, including treatment and country as fixed effects. Other safety data were compared by descriptive statistics. Values for descriptive statistics are expressed as means ± sd, while ancova results are expressed as least square means ± SEM or with 95% CI unless otherwise noted. Significance levels were set to 5% for two-sided tests and 2.5% for one-sided tests.\n\nBODY.RESULTS.DISPOSITION AND DEMOGRAPHICS:\nThe treatment groups were well balanced (Table 1). Of 1712 subjects screened, 1041 were randomized and 1040 were exposed to trial drugs; 147 subjects (14.1%) withdrew (Fig. 2). Withdrawals were higher with placebo (27%) and rosiglitazone treatment (16%) compared with liraglutide 0.6 mg (11%), liraglutide 1.2 mg (14%) and liraglutide 1.8 mg (9%) treatment. Thirty-eight subjects (3.7%) withdrew as a result of AEs (Fig. 2). Table 1 Demographic characteristics of study participants Liraglutide 0.6 mg ( n = 233) Liraglutide 1.2 mg ( n = 228) Liraglutide 1.8 mg ( n = 234) Placebo ( n = 114) Rosiglitazone ( n = 232) Male : female (%) 54 : 46 45 : 55 53 : 47 47 : 53 47 : 53 Age (years) 55.7 ± 9.9 57.7 ± 9.0 55.6 ± 10.0 54.7 ± 10.0 56.0 ± 9.8 Duration of diabetes (years) 6.5 (4.0,10.2) 6.7 (4.0,10.7) 6.5 (3.7,10.5) 6.5 (4.5,10.6) 6.6 (4.3,10.7) Previous on mono : combi (%) 30 : 70 31 : 69 27 : 73 32 : 68 32 : 68 FPG (mmol/l) 10.0 ± 2.4 9.8 ± 2.7 9.7 ± 2.4 9.5 ± 2.0 9.9 ± 2.5 HbA 1c (%) 8.4 ± 1.0 8.5 ± 1.1 8.5 ± 0.9 8.4 ± 1.0 8.4 ± 1.0 Diabetic retinopathy (%) 17.2 14.9 12.0 13.2 16.4 Hypertension (%) 69.1 68.0 69.7 64.9 66.8 BMI (kg/m 2 ) 30.0 ± 5.0 29.8 ± 5.1 30.0 ± 5.1 30.3 ± 5.4 29.4 ± 4.8 Weight (kg) 82.6 ± 17.7 80.0 ± 17.1 83.0 ± 18.1 81.9 ± 17.1 80.6 ± 17.0 Systolic blood pressure (mmHg) 131 ± 16 133 ± 15 132 ± 16 131 ± 15.3 133 ± 15 Data are mean ± sd and percentages, except for duration of diabetes, where data are median, 25th and 75th percentile. 
BMI, body mass index; FPG, fasting plasma glucose; HbA 1c , glycated haemoglobin; mono : combi, previous treatment with either monotherapy or combination therapy; sd , standard deviation. FIGURE 2Flow of patients through the study.\n\nBODY.RESULTS.EFFICACY.HBA:\nHbA1c decreased rapidly with all doses of liraglutide when added to glimepiride compared with either rosiglitazone or placebo (i.e. glimepiride monotherapy), irrespective of previous therapy. The greatest decreases occurred with liraglutide 1.2 and 1.8 mg (Fig. 3a–c). After 26 weeks, HbA1c decreased by 1.1% from baseline (primary endpoint) with either liraglutide 1.2 or 1.8 mg, respectively, compared with either placebo (+0.2%) or rosiglitazone (−0.4%) (Fig. 3d). Estimated treatment differences and 95% CIs to placebo were: liraglutide 1.8 mg: −1.4% (1.6, −1.1); liraglutide 1.2 mg: −1.3% (1.5, −1.1); liraglutide 0.6 mg: −0.8% (−1.1, −0.6); rosiglitazone: −0.7% (−0.9, −0.4). All liraglutide doses were superior to placebo (P < 0.0001), while the two higher liraglutide doses were superior to rosiglitazone (P < 0.0001). Liraglutide 0.6 mg was non-inferior to rosiglitazone. Rosiglitazone also was superior to placebo (P < 0.0001). FIGURE 3Mean glycated haemoglobin (HbA1c) by treatment and week (intent-to-treat population with last observation carried forward): (a) overall population; (b) previously on monotherapy; or (c) previously on combination therapy; (d) mean changes in HbA1c from baseline after 26 weeks of treatment. Keys: (a–c) liraglutide 0.6 mg: grey dotted line with squares; liraglutide 1.2 mg: black solid line with triangles; liraglutide 1.8 mg: black dotted line with squares; rosiglitazone: grey solid line with circles; placebo: black solid line with circles. (d) liraglutide 0.6 mg: black stripes on white; liraglutide 1.2 mg: white stripes on black, liraglutide 1.8 mg: grey tint; rosiglitazone: white; placebo: black. ****P < 0.0001 compared with placebo; ††††P < 0.0001 compared with rosiglitazone. HbA1c decreases were greater for subjects who entered from monotherapy compared with combination therapy (Fig. 3d). However, because the increase with placebo was higher for individuals entering on combination therapy (0.7 vs. 0.23%), the differences between treatment groups in favour of liraglutide were similar irrespective of whether subjects were treated previously with monotherapy or combination therapy. Neither age, gender nor BMI affected these trends.\n\nBODY.RESULTS.EFFICACY.PERCENTAGE REACHING AN HBA:\nThe percentage of subjects reaching ADA [2] and International Diabetes Federation (IDF)/American Association of Clinical Endocrinologists (AACE) [11,12] treatment HbA1c goals with liraglutide was dose dependent (Fig. 4). At week 26, 42% and 21% of subjects treated with liraglutide 1.8 mg reached an HbA1c < 7.0% and ≤ 6.5%, respectively, compared with 8% and 4% for placebo (Fig. 4). The estimated proportion of subjects treated with either liraglutide 1.2 or 1.8 mg reaching ADA/EASD and IDF/AACE HbA1c targets was substantially greater compared with either placebo (P < 0.0001) or rosiglitazone (Fig. 4; P ≤ 0.0003), with more patients reaching < 7.0% with liraglutide 1.8 mg compared with 1.2 mg (P = 0.018). 
FIGURE 4Subjects achieving specified glycated haemoglobin (HbA1c) levels: (a) percentage reaching HbA1c < 7.0% (American Diabetes Association/European Association for the Study of Diabetes target); (b) percentage reaching HbA1c < 6.5% (International Diabetes Federation/American Association of Clinical Endocrinologists targets); (c) cumulative distribution of HbA1c at 26 weeks for the intent-to-treat (ITT) population; and (d) for the ITT last observation carried forward (LOCF) population. Keys: (a, b) liraglutide 0.6 mg: black stripes on white; liraglutide 1.2 mg: white stripes on black, liraglutide 1.8 mg: grey tint; rosiglitazone: white; placebo: black. (c, d) liraglutide 0.6 mg: pale grey solid line; liraglutide 1.2 mg: grey solid line, liraglutide 1.8 mg: black solid line; rosiglitazone: dotted black line; placebo: dotted grey line; baseline visit: long dashed black line. ****P < 0.0001 or **P < 0.01 compared with placebo; ††††P < 0.0001 or †††P = 0.0005 compared with rosiglitazone.\n\nBODY.RESULTS.EFFICACY.FASTING PLASMA GLUCOSE:\nBy week 2, subjects treated with liraglutide had rapid and larger decreases in FPG vs. comparator treatment. At week 26, all doses of liraglutide decreased FPG more than did placebo (Fig. 5; P < 0.0001), while only liraglutide 1.2 or 1.8 mg produced greater reductions than rosiglitazone. FPG treatment differences to placebo were 1.7 mmol/l for liraglutide 0.6 mg and 2.6 mmol/l for both liraglutide 1.2 and 1.8 mg. An 0.7-mmol/l greater reduction in FPG was achieved with either liraglutide 1.2 or 1.8 mg compared with rosiglitazone (P ≤ 0.006) after 26 weeks. FIGURE 5Mean changes from baseline in fasting plasma glucose after 26 weeks of treatment. ****P < 0.0001 compared with placebo; ††P < 0.01 compared with rosiglitazone. The percentage of subjects achieving FPG values between 5.0 mmol/l and ≤ 7.2 mmol/l (ADA target) after 26 weeks was higher with liraglutide: 0.6 mg (19%; P = 0.002); 1.2 mg (37%; P < 0.001); and 1.8 mg (38%;P < 0.001) compared with placebo (7%). The liraglutide 1.2 and 1.8 mg treatment groups also had more subjects achieving the same FPG target at end of treatment compared with rosiglitazone (26%) (P = 0.007 and P = 0.01, respectively).\n\nBODY.RESULTS.EFFICACY.POSTPRANDIAL PLASMA GLUCOSE:\nPPG was reduced similarly after each meal. The greatest reductions in mean PPG values from baseline (average of values obtained 90 min after breakfast, lunch and evening meal) occurred with liraglutide 1.2 mg (2.5 mmol/l) and liraglutide 1.8 mg (2.7 mmol/l). By comparison, the reduction from baseline in mean PPG values was 1.8 mmol/l for rosiglitazone and liraglutide 0.6 mg and 0.4 mmol/l for placebo. Treatment differences for PPG were greater with all doses of liraglutide compared with placebo (1.5–2.4 mmol/l; P < 0.0001) and greater with liraglutide 1.2 mg (0.64 mmol/l; P = 0.043) and 1.8 mg (0.87 mmol/l;P = 0.0022) compared with rosiglitazone.\n\nBODY.RESULTS.EFFICACY.PPG MEASUREMENTS < 10.0 MMOL/L:\nThe percentage of subjects with one, two or three PPG measurements < 10.0 mmol/l (ADA target) were greater for all doses of liraglutide compared with placebo (P < 0.05) but not rosiglitazone.\n\nBODY.RESULTS.BODY WEIGHT:\nMean weight at baseline was 81.6 kg. Mean reductions in weight from baseline to end of treatment were 0.2 kg with liraglutide 1.8 mg and 0.1 kg with placebo treatment, while increases occurred with either liraglutide 0.6 mg (0.7 kg), liraglutide 1.2 mg (0.3 kg) or rosiglitazone (2.1 kg) (Fig. 6). 
Unlike rosiglitazone, weight did not increase substantially with liraglutide and the differences between rosiglitazone and liraglutide were statistically significant (−2.3 to −1.4 kg; P < 0.0001), although there were no significant differences compared with placebo. Gender appeared to have no influence on the results, as indicated when added as a fixed effect in the ancova model. FIGURE 6Mean changes in body weight from baseline after 26 weeks of treatment. *P < 0.05 compared with placebo; ††††P < 0.0001 compared with rosiglitazone.\n\nBODY.RESULTS.INDICES OF PANCREATIC B-CELL FUNCTION AND INSULIN RESISTANCE:\nReductions in the proinsulin : insulin ratio were greater with both liraglutide 1.2 and 1.8 mg compared with either rosiglitazone or placebo (Table 2; P ≤ 0.02). HOMA-B increased with liraglutide (1.8 or 1.2 mg) compared with rosiglitazone (P < 0.05), while this increase was only different to placebo with liraglutide 1.2 mg (P = 0.01) and not liraglutide 1.8 mg (P = 0.051). There were no significant differences between treatments for HOMA-IR. Table 2 Selected indices of pancreatic B-cell function Variable Treatment Baseline Week 26 (LOCF) Least square difference from placebo (95% CI) Least square difference from rosiglitazone (95% CI) Proinsulin : insulin ratio Liraglutide 0.6 mg 0.42 ± 0.22 0.38 ± 0.24 −0.05 (−0.11; 0.00) −0.02 (−0.06; 0.03) Liraglutide 1.2 mg 0.45 ± 0.31 0.33 ± 0.20 −0.10 (−0.16; −0.05) † −0.07 (−0.11; −0.02) * Liraglutide 1.8 mg 0.48 ± 0.33 0.36 ± 0.20 −0.09 (−0.15; −0.03) * −0.05 (−0.10; −0.01) * Placebo 0.44 ± 0.27 0.46 ± 0.29 Rosiglitazone 0.45 ± 0.29 0.40 ± 0.20 HOMA-B (%) Liraglutide 0.6 mg 51 ± 43.3 70 ± 88.6 15 (−19.10; 49.0) 11 (−16.7; 39.0) Liraglutide 1.2 mg 71 ± 254.3 99 ± 184.3 43 (8.10; 76.9) * 39 (10.3; 67.0) * Liraglutide 1.8 mg 56 ± 84.6 91 ± 108.2 34 (−0.23; 68.5) 30 (2.00; 58.6) * Placebo 56 ± 103.3 52 ± 107.3 Rosiglitazone 46 ± 36.2 59 ± 63.3 * P ≤ 0.05; † P < 0.0001. CI, confidence interval; HOMA, homeostatis model assessment; LOCF, last observation carried forward. \n\nBODY.RESULTS.BLOOD PRESSURE AND PULSE:\nAlthough decreases in systolic blood pressure occurred with either liraglutide 1.2 or 1.8 mg (2.6–2.8 mmHg), they were not significantly different from placebo or rosiglitazone (0.9–2.3 mmHg). Reductions in diastolic blood pressure also occurred with all treatments (0.7–1.4 mmHg), with no significant differences between treatments. Pulse increases above baseline ranged from 2 to 4 beats/min with the three doses of liraglutide and 1 beat/min with rosiglitazone, while pulse decreased by 1 beat/min with placebo. Changes in pulse for all doses of liraglutide were significant vs. placebo (P ≤ 0.002). This also was true with either liraglutide 1.8 or 1.2 mg compared with rosiglitazone (P < 0.01).\n\nBODY.RESULTS.SAFETY:\nThe most common treatment-emergent AEs that were considered by investigators to be either possibly or probably related to liraglutide were gastrointestinal (diarrhoea, nausea, dyspepsia and constipation) and nervous system disorders (headache and dizziness), particularly during the first 4 weeks. Nausea was highest with liraglutide 1.2 mg (10.5%) and lowest with placebo (1.8%). Vomiting (4.4%) and diarrhoea (7.9%) were also higher with liraglutide 1.2 mg. Withdrawals because of nausea ranged from 0.9–2.2%, vomiting 0.4–0.9% and diarrhoea 0–1.3%. Nausea was more common with liraglutide compared with placebo and rosiglitazone, particularly during the first 4 weeks (Fig. 7). 
Frequency of nausea was less in the liraglutide 0.6 mg treatment group compared with the higher doses of liraglutide. Generally, the occurrence of nausea dissipated from 4 to 26 weeks of treatment in all groups using liraglutide (Fig. 7). FIGURE 7Percentage of subjects experiencing nausea over the course of the study. Key: liraglutide 0.6 mg with glimepiride: black line with filled circles; liraglutide 1.2 mg with glimepiride: black line with filled triangles; liraglutide 1.8 mg with glimepiride: grey line with hollow circles; glimepiride grey lines with filled squares; rosiglitazone and glimepiride: grey line with hollow triangles. The incidence of serious AEs ranged between 3 and 5%: placebo (3%), rosiglitazone (3%), liraglutide 0.6 mg (3%), liraglutide 1.2 mg (4%) and liraglutide 1.8 mg (5%). Most treatment-emergent serious AEs were judged by investigators to be unlikely to be related to trial products. No deaths were reported during the trial. One subject developed chronic pancreatitis whilst taking liraglutide 0.6 mg; the person had no reported previous history of pancreatitis. The subject continued on liraglutide therapy and completed the trial. At screening, five patients had been previously diagnosed with pancreatitis. As pancreatitis was not an exclusion criterion, these patients were randomized as follows: one to liraglutide 0.6 mg, one to liraglutide 1.2 mg, two to liraglutide 1.8 mg and one to rosiglitazone + glimepiride. All five patients completed the trial without reporting pancreatitis as an adverse event. Hypoglycaemia was infrequent with all treatments. One major hypoglycaemic episode (self-measured blood glucose = 3.0 mmol/l) occurred 9 days after treatment started in a subject receiving liraglutide 1.8 mg in combination with glimepiride. Although medical assistance was not needed, the subject required third-party assistance. The investigator judged the episode as likely to be related to glimepiride and reduced the dose from 4 to 3 mg after the incident. Minor hypoglycaemia occurred in < 10% of subjects for any treatment. The proportion of subjects experiencing minor hypoglycaemia during the trial was lowest with placebo (i.e. glimepiride monotherapy 2.6%; 0.17 events/subject-year), comparable with liraglutide 0.6 mg (5.2%, 0.17 events/subject-year) and rosiglitazone (4.3%, 0.12 events/subject-year) groups and similar between the liraglutide 1.2 mg (9.2%, 0.51 events/subject-year) and liraglutide 1.8 mg (8.1%, 0.47 events/subject-year) treatment groups. Incidence was higher with liraglutide 1.2 mg (P = 0.0024) and 1.8 mg (P = 0.0065) compared with rosiglitazone and liraglutide 1.2 mg compared with placebo (P = 0.048), occurring in the setting of lower mean HbA1c values. Antibodies to liraglutide were found in 9–13% of subjects treated with liraglutide. No significant effects of these antibodies on HbA1c were found in pooled analyses of four trials including the current study. There were no clinically relevant changes in ophthalmoscopy, biochemistry, urinalysis, haematology or ECG assessments. No significant differences in calcitonin were found between the three groups treated with liraglutide when compared with either placebo or rosiglitazone at the end of the trial at week 26.\n\nBODY.DISCUSSION:\nTreatment with liraglutide plus glimepiride was superior to glimepiride monotherapy at all doses of liraglutide and superior to rosiglitazone plus glimepiride for the two higher liraglutide doses for improving HbA1c. 
Similar findings for reductions in FPG and PPG highlight improved 24-h glucose control with once-daily liraglutide, with substantially more subjects reaching glycaemic targets, particularly with liraglutide 1.8 mg. Improvements in pancreatic B-cell function were larger with liraglutide 1.2 and 1.8 mg compared with rosiglitazone. Liraglutide was well tolerated and occurrence of gastrointestinal AEs was low overall, particularly after week 4. Although rates of hypoglycaemia were low in all treatment groups (< 10%), minor hypoglycaemic events occurred more often in patients treated with glimepiride plus liraglutide 1.2 or 1.8 mg than with glimepiride alone. It should be noted, however, that patients treated with liraglutide 1.2 or 1.8 mg achieved a lower HbA1c than those receiving glimepiride monotherapy. At lower HbA1c levels, sulphonylureas are known to elicit hypoglycaemia more readily than at higher levels. In clinical practice it may be possible to reduce the dose of sulphonylurea (when used with liraglutide) to minimize risk of hypoglycaemia and maintain HbA1c improvements. Although weight effects were modest, liraglutide produced more favourable weight effects compared with rosiglitazone, which produced substantial weight gain. In other studies with liraglutide, subjects adding a 1.8-mg dose to metformin lost 2.8 kg [14], while those adding both metformin and glimepiride lost 1.8 kg compared with placebo [15] (both over 26 weeks) and those on liraglutide monotherapy (1.8 mg) lost 2.45 kg over 52 weeks [16]. In our study, because sulphonylureas usually cause weight gain, inclusion or optimization of glimepiride but not metformin may have mitigated the weight benefits typically associated with liraglutide. Lack of weight effects could be secondary to lower baseline body weight, withdrawal of previous metformin treatment or defensive snacking to minimize risk of hypoglycaemia. It might have been expected that the greater weight gain with rosiglitazone compared with liraglutide 1.8 mg would be associated with a concurrent increase in insulin resistance with rosiglitazone. The absence of this effect could reflect the insulin-sensitizing nature of rosiglitazone. Improvements in pancreatic B-cell function associated with liraglutide are consistent with other studies [7–9]. Study strengths include inclusion of both placebo and active (rosiglitazone) comparators and that OGLAs were optimized (not maximized) before randomization to minimize risk of hypoglycaemia. Limitations of the study include short duration of the trial and restriction on glimepiride and rosiglitazone in some countries that precluded maximal dosing. The impact of using other GLP-1-based treatments [such as exenatide, or the dipeptidyl peptidase-4 (DPP-4) inhibitor, sitagliptin] with sulphonylureas in subjects with T2D has been studied. In a 30-week American trial where exenatide twice a day was added to sulphonylureas, HbA1c was reduced by 0.46% from baseline with 5 μg and 0.86% with 10 μg [17] compared with 1.1% with liraglutide 1.8 or 1.2 mg. This reduction in HbA1c with liraglutide is consistent with other LEAD trials investigating liraglutide as monotherapy or in combination with various OGLA drugs. In these trials, HbA1c was reduced by 1–1.5% [14,16,18–20]. Reductions in FPG with exenatide were 0.3 and 0.6 mmol/l from baseline with 5 μg and 10 μg, respectively, compared with 1.4 mmol/l with liraglutide 1.8 mg; weight loss of 1.6 kg occurred with exenatide 10 μg compared with 0.2 kg for liraglutide 1.8 mg [17]. 
Differences in weight effects may be as a result of lower baseline weight in this trial (82 kg) compared with exenatide (96 kg) and discontinuation of previous metformin therapy, unlike the exenatide trial where exenatide was added to previous sulphonylurea monotherapy [17]. Other large-scale trials with liraglutide in combination with sulphonylureas have demonstrated weight loss of 2–3 kg [18,20]. Withdrawals from exenatide trials ranged from 24–30% compared with 9–14% with liraglutide in this study. Nausea with exenatide ranged from 39% with 5 μg to 51% with 10 μg [17] compared with 10.5% for liraglutide. Furthermore, 41% were positive for anti-exenatide antibodies compared with 9–13% with anti-liraglutide antibodies. With sitagliptin 100 mg once daily for 24 weeks, HbA1c decreased by 0.3% from baseline in subjects receiving glimepiride, with 11% achieving an HbA1c < 7.0%[21]. Reductions in FPG and PPG from baseline were 0.05 and 1.4 mmol/l, respectively, while weight increased by 0.8 kg and the prevalence of nausea was < 1%. Although head-to-head trials are required to test true differences between these agents, the marked effects of liraglutide on FPG may be as a result of consistent blood levels of liraglutide maintained over 24 h compared with exenatide which has to be administered 60 min before breakfast and dinner and has a half-life of 1.5–3.6 h [22]. In a recent 26-week head-to-head trial comparing liraglutide with exenatide, liraglutide produced a 0.3% greater decrease on HbA1c (P < 0.0001) [20]. Because DPP-4 inhibitors inhibit the degradation of GLP-1, the efficacy of sitagliptin is dependent on levels of endogenous GLP-1 which is physiologically low compared with the much higher pharmacological levels of liraglutide. Pharmacological levels may be needed to induce satiety, weight loss and possibly larger HbA1c reductions. Liraglutide is an effective and well-tolerated once-daily human GLP-1 analogue that improves overall glycaemic control and indices of pancreatic B-cell function with minimal weight gain and risk of hypoglycaemia when used in combination with a sulphonylurea for T2D.\n\nBODY.COMPETING INTERESTS:\nThe study was funded by Novo Nordisk, the manufacturer of liraglutide. In collaboration with the investigators, Novo Nordisk was responsible for the study design, protocol, statistical analysis plans, oversight, analysis and reporting of the results. Data were recorded at the clinical centres and maintained by the sponsor. The LEAD-1 SU study group had full access to the data. Final responsibility for the decision to submit the manuscript for publication was the authors. MM has received lecture fees from Novo Nordisk, Servier, MSD; JS has received honoraria, grants and lecture fees from Novo Nordisk; MB, WMWB and NAK have no conflicts to declare; JS has received lecture fees from Novo Nordisk; MZ is employed by, and holds stock in, Novo Nordisk; TLT is employed by Novo Nordisk; SC is a member of the international advisory board on liraglutide for Novo Nordisk and has received lecture fees from Novo Nordisk.\n\n**Question:** Compared to Rosiglitazone plus glimepiride what was the result of Liraglutide (1.2 mg) plus glimepiride on HbA1c level at 26 weeks?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly decreased\n"
}
] |
significantly decreased
|
204
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Immunogenicity and Smoking Cessation Outcomes for a Novel Nicotine Immunotherapeutic\n\n ABSTRACT:\nNicVAX®, a nicotine vaccine (3'AmNic-rEPA), has been clinically evaluated to determine if higher antibody concentrations are associated with higher smoking abstinence rates and if doses and frequency of administration are associated with increased antibody response. This randomized, double-blinded, placebo-controlled multicenter clinical trial (N=301 smokers) tested 200 and 400 μg doses administered 4 or 5 times over 6 months compared to placebo. 3'AmNic-rEPA recipients with the highest serum anti-nicotine antibody response (top 30% by AUC) were significantly more likely to attain 8 weeks continuous abstinence from weeks 19 through 26 than the placebo recipients (24.6% vs. 12.0%, p=0.024, OR=2.69, 95% CI, 1.14–6.37). The 5 injection 400 μg dose regimen had the greatest antibody response and had significantly higher abstinence rates than placebo. This study demonstrates proof-of-concept that 3'AmNic-rEPA elicits antibodies to nicotine and is associated with higher continuous abstinence rates, justifying its further development as a treatment for nicotine dependence.\n\nBODY.INTRODUCTION:\nWorldwide, smoking prevalence is 1.2 billion and approximately 5 million people die each year of smoking caused illnesses [1]. The global rate of smoking and smoking related deaths is anticipated to increase over the next 20 years unless significant public health measures are instituted. These include effective cessation interventions such as pharmacological treatments, which improve cessation rates by 1.5 to 3 fold over placebo intervention [2, 3]. Approved pharmacotherapies (e.g., nicotine replacements, bupropion SR, varenicline) for smoking cessation act on the central nervous system, each with a different mechanism of action. Other novel medications are being developed including immunotherapeutics targeting nicotine. Nicotine conjugate vaccines stimulate the immune system to develop nicotine specific antibodies (Abs) using an immunogen comprised of nicotine covalently linked to a larger carrier protein. Conceptually, the mechanism of action is that anti-nicotine antibodies bind nicotine molecules and the resulting complex is too large to cross the blood-brain barrier. With increasing Ab levels more nicotine is captured and sequestered in the blood and prevented from entering the brain, leading to less reinforcing effects from nicotine. Animal studies have demonstrated that passive or active immunization results in approximately 30% to 90% less nicotine entering the brain compared to control rats [4–7] and attenuated locomotor [4, 5] and behavioral [8, 9] responses to nicotine. Furthermore, vaccination reduced nicotine elimination from the body in a study with rats [10, 11], which may also contribute to reduced smoking. 
Although human studies are limited, published data evaluating different nicotine vaccines support the general concept that nicotine vaccines can be effective for smoking cessation in some smokers [12, 13]. Unfortunately, these studies either had small sample sizes [12], did not use an intent-to-treat population of smokers [13] or did not perform statistical analysis of the data [14]. The primary aim of the present study was to establish the proof-of-concept that (i) anti-nicotine antibodies are useful as an aid to smoking cessation and (ii) higher serum anti-nicotine antibody concentrations are associated with higher abstinence rates in an intent-to-treat population of smokers. One of the challenges with immunotherapeutics, such as vaccines, is attainment of therapeutic levels of antibody in most people. Therefore, this study tested two different doses of 3'-aminomethylnicotine P. aeruginosa r-exoprotein A - NicVAX (3'AmNic-rEPA) to identify a dose and schedule for further development: 200 and 400 μg across two different schedules (4 or 5 injections) compared to placebo for immunogenicity, efficacy and safety.\n\nBODY.RESULTS:\nA total of 301 subjects were randomized. Figure 2 shows the disposition and number of subjects within each treatment group. No significant group differences were observed in the demographic or smoking history by treatment or antibody levels (see Table 1).\n\nBODY.RESULTS.COMPLIANCE:\nAll 301 subjects received injection 1, 96.7%, 84.1%, 72.4% and 61.2% received injections 2, 3, 4 and 5 (Schedule 2 subjects only), respectively. No significant differences were observed across treatment groups for subjects receiving injections 2 through 4 for Schedules 1 and 2. Mean in-study duration was 286 ± 121 days.\n\nBODY.RESULTS.PROOF-OF-CONCEPT.EFFECTS OF HIGH AB ON ABSTINENCE:\nHigh Ab responders to 3'AmNic-rEPA were defined as the top 30% of responders by AUC (0 to 26 weeks) and the low Ab group as the remaining bottom 70% of responders. 3'AmNic-rEPA recipients in the high Ab group were significantly more likely to attain 8 weeks of continuous abstinence from weeks 19 through 26 than placebo (24.6% vs. 12.0%, p=0.024, OR=2.69, 95% CI, 1.14–6.37). No significant differences were observed between the 3'AmNic-rEPA low Ab group vs. placebo (9.3% vs. 12.0%., p=0.46). As a secondary outcome, continuous abstinence rate (CAR) to 52 weeks were evaluated from weeks 19 to 52 and were significantly higher for the high Ab group vs. placebo (19.7% vs. 10.0%, p=0.044, OR=2.64, 95% CI, 1.03–6.79) with no significant difference between the low Ab group and placebo (7.1% vs. 10.0%, p=0.43). 7-day point prevalence abstinence results show that subjects with high Ab levels were significantly more likely to abstain compared to placebo at 26 weeks (36.1% vs. 16.0%, p=0.0024, OR=3.30, 95% CI, 1.53–7.13) and 52 weeks (31.1% vs. 12.0%, p=0.0021, OR=3.69, 95% CI, 0.42–2.14). No significant differences were observed in the point prevalence abstinence rates between the low Ab group and placebo at 26 and 52 weeks (12.9% vs. 16.0%, p=0.51 and 11.4% vs. 12.0%, p=0.89, respectively). As shown in Figure 3A, abstinence rates remained essentially the same following the target quit date (TQD) for the duration of the study. To further validate the proof-of-concept, the relationship between abstinence during the final eight weeks of the study and anti-nicotine Ab concentrations (AUC) is shown in Figure 3B for all subjects receiving the vaccine. 
Continuous abstinence rates from week 45–52 are displayed for each 10th percentile increase in AUC. The proportion of abstinent subjects increased with increasing AUC percentile, and the ordered ranking maintained.\n\nBODY.RESULTS.PROOF-OF-CONCEPT.EFFECTS OF HIGH AB ON TIME TO CONTINUOUS ABSTINENCE:\nAn exploratory analysis examined the rate and time to continuous abstinence through study end (Figure 3C). Most smokers quit soon after the TQD, with a clear divergence of the high Ab group from the placebo and low Ab group. Among the 18 high Ab continuous abstainers, 12 initiated abstinence prior to the primary endpoint while 6 initiated their abstinence after the start of the primary endpoint. Furthermore, 3 of the 15 subjects who abstained during the primary endpoint relapsed by the end of the study. Cox proportional hazards analysis demonstrated superiority of the high Ab group compared to placebo (p=0.0069, hazard ratio of 2.76).\n\nBODY.RESULTS.PROOF-OF-CONCEPT.EVALUATION OF LONG-TERM ABSTINENCE:\nAs most subjects achieved abstinence shortly after their TQD, additional analyses were undertaken to evaluate prolonged abstinence to 6 and 12 months allowing for a 2 week grace period after the TQD [15]. (Prolonged abstinence is defined as not a single puff during the period from 2 weeks after the TQD for 20 weeks and 44 weeks, respectively). Prolonged abstinence rates to 6 months were significantly higher in the high Ab group vs. placebo (19.7% vs. 6.0%, p=0.0060, OR=4.41, 95% CI, 1.53–12.71) with no significant differences between the placebo and low Ab groups (7.9% vs. 6.0%, p=0.60). Subjects with high Ab were also significantly more likely to be abstinent for 12 months compared to placebo (18.0% vs. 6.0%, p=0.014; OR of 3.84; 95% CI, 1.32–11.20). The low Ab group did not differ significantly from placebo (7.1% vs. 6.0%, p=0.67).\n\nBODY.RESULTS.PROOF-OF-CONCEPT.CIGARETTE SMOKING IN NON-QUITTERS:\nStatistically significant differences were observed in reduction of daily cigarette consumption and cotinine between non-abstainers (weeks 19–52) with high Ab levels and non-abstainers in the placebo group (p=0.0015 and 0.019, respectively; see Figures 4A and 4C. The difference in the median reduction in cigarette consumption, following the TQD, between the high Ab non-abstainers and placebo non-abstainers was on average 4.6 cigarettes per day. Similarly, cotinine geometric mean concentrations (GMCs) were 19.0% lower on average following the target quit date in the high Ab non-abstainers as compared to placebo non-abstainers. Median cigarettes per day and cotinine GMC for placebo and low Ab group are nearly identical over the study period. Differences in mean CO were not observed across all three groups (see Figure 4B). 15 out of the 301 subjects smoked more than a 2-fold higher number of cigarettes following the TQD as compared to baseline with no significant difference between the placebo (n=4/100) and 3'AmNic-rEPA (n=11/201) groups. The highest smoking levels observed post-TQD were 5-fold baseline in the placebo and 4-fold baseline in the 3'AmNic-rEPA groups. Elevated smoking levels for 11 of these 15 subjects returned to below 2-fold baseline by study end. Subjects with remaining elevated levels were similar between the 3'AmNic-rEPA group (n=3/201) and placebo (n=1/100). 
Individual subjects with CO levels elevated by 2-fold or higher than baseline were also assessed and results were similar with no significant difference between the placebo (n=5/100) and 3'AmNic-rEPA (n=13/201) groups.\n\nBODY.RESULTS.PROOF-OF-CONCEPT.WITHDRAWAL SYMPTOMS:\nSignificant differences were not observed in overall withdrawal severity between placebo and the high and low Ab groups (p >0.22).\n\nBODY.RESULTS.EFFECTS OF DOSE AND SCHEDULE.IMMUNOGENICITY AND EFFICACY BY STUDY GROUP:\nFigure 5 depicts immune response by study group from baseline to week 52. Anti-nicotine Ab GMCs increased across all active treatment groups after each vaccination, with each subsequent dose resulting in a higher Ab response than the previous dose. Schedule 2 showed a higher initial increase in Ab concentrations. The 400 μg/Schedule 2 group demonstrated the highest Ab concentrations. However, no significant differences (p >0.05) were observed in AUC, Cmax and Cavg across the treatment groups to 26 or 52 weeks likely because the study was not powered to detect such difference. An intent-to-treat analysis demonstrated that the Schedule 2, 400 μg dose group had significantly higher prolonged abstinence to 6 months as compared to placebo (17.6% vs. 6.0%; p=0.015; OR of 4.14; 95% CI, 1.32–13.02), although not significant for the Schedule 2, 200 μg dose (14.0% vs. 6.0%, p=0.054; OR=3.23; 95% CI, 0.98–10.67) or between placebo and Schedule 1 for each of the doses (p >0.84). The Schedule 2, 400 μg dose group also had the highest rates of prolonged abstinence to 12-months, significantly higher than placebo (15.7% vs. 6.0%, p=0.038; OR=3.44; 95% CI, 1.07–11.04), but not significant for the Schedule 2, 200 μg dose (14.0% vs. 6.0% for 200 (p=0.056; OR=3.21; 95% CI, 0.97–10.63) or between placebo and Schedule 1 for each dose (p >0.88).\n\nBODY.RESULTS.EFFECTS OF DOSE AND SCHEDULE.SAFETY:\nTable 2 lists the number of subjects experiencing local and systemic reactogenicity. Reactogenicity events were aggregated over all injections. Overall, ache and tenderness were the most commonly reported local events, with at least one report by nearly all subjects (86% to 98%) in each treatment group. Myalgia, malaise, and headache were the most commonly reported systemic events (64% to 88% of subjects). Swelling, heat, burning, erythema, and nausea were reported by about half the subjects. Fever and vomiting were less common (4% to 16%). A total of 1184 treatment-emergent adverse events (AEs), predominantly rated mild or moderate, were reported by 266 of 301 subjects; 87.1% of 3'AmNic-rEPA recipients and 91.0% of placebo recipients. On average, 3.7 and 4.3 events were observed per person in the vaccinated and placebo groups, respectively, including subjects with no events. The distribution of 161 physician-determined treatment-related AEs, according to severity and relationship to treatment, was similar for the 3'AmNic-rEPA and placebo arms. Seven 3'AmNic-rEPA recipients (3.5%) and 2 subjects in the placebo arm (2.0%) withdrew from the study due to adverse events. 18 Serious AEs were reported: 8 events in the 3'AmNic-rEPA treatment groups among 7 subjects (3.5% of the 3'AmNic-rEPA recipients) and 10 events in the placebo group among 5 subjects (5.0% of the placebo recipients). Only one of these SAEs (anaphylactic reaction in a 3'AmNic-rEPA 400 μg/Schedule 2 recipient) was considered by the Investigator to be treatment related. 
This subject, who had a history of urticaria to penicillin and seasonal allergies, experienced difficulty breathing, throat tightness, facial erythema and urticaria 70 minutes after the initial vaccination. The subject was treated with a single injection of subcutaneous epinephrine and diphenhydramine, which resolved the symptoms. Herpes Zoster was reported in 6 subjects; one occurring within 3–5 hours after the first vaccination which would be impossible to link to vaccination. Of the remaining cases, 4 (2%) occurred in 3'AmNic-rEPA recipients; while 1 occurred in placebo (1%). In contrast, the related herpes virus, simplex infection was reported in 4 3'AmNic-rEPA recipients (1%) and 3 placebo recipients (2%).\n\nBODY.DISCUSSION:\nResults demonstrated the proof-of-concept that smokers who achieved higher anti-nicotine Ab concentrations were more likely to quit and remain abstinent from smoking. The high Ab group demonstrated the highest abstinence rates independent of the time period of ascertainment. Similarly, in a separate study conducted by Cornuz and coworkers [13] testing a different nicotine vaccine (nicotine derivative conjugated to a virus-like particle derived from bacteriophage Qβ), post hoc analysis showed subjects stratified to the highest Ab group had a significantly higher quit rate than placebo. However, unlike the current study, which used the intent-to-treat (ITT) population to establish proof of concept, the reported finding by Cornuz et al. [13] was observed after eliminating about a third of the subjects who used nicotine replacement therapies during the course of the study or who had incomplete Ab titer values. In the present 3'AmNic study, subjects in the high Ab group had observed odds ratios of 4.4 (95% CI 1.5–12.7) and 3.9 (95% CI 1.3–11.2) for prolonged abstinence rates to 6 and 12 months versus placebo. Although no direct comparisons can be made, these odds ratios are not unlike ones observed in the meta-analyses conducted for the U.S. Clinical Practice Guideline where the reported odds-ratios ranged from 1.5 (95% CI 1.2–1.7) for nicotine gum to 3.1 (95% CI 2.5–3.8) for varenicline at 6 months post-quit [3]. If current study findings are confirmed in larger studies, immunotherapeutics are likely to emerge as an important aid to smoking cessation. No significant compensatory smoking in response to the presence of anti-nicotine antibodies, as determined by the number of cigarettes smoked per day, carbon monoxide levels or cotinine levels, was observed in this study. This result is consistent with observations from other studies [12, 13]. In the current study, subjects in the high Ab group who did not abstain smoked significantly lower number of cigarettes (median reduction of ~5 cigarettes/day) and experienced lower cotinine levels (~20%) than placebo subjects along with no differences observed between the low Ab and placebo groups. However, a small number of individual subjects (N=15/301) smoked more than 2-fold the number of cigarettes at baseline with no significant differences between the active treatment and placebo groups. A major challenge for immunotherapeutics is to stimulate high Ab levels in the vast majority of smokers trying to quit. Vaccine dose and frequency of vaccination have an impact on the Ab levels attained. The 5 injection/400 μg dose was associated with the highest Ab response, although not statistically significant, possibly due to the small sample size. 
Importantly, this dose and schedule demonstrated statistically significant improved 44-week CAR compared to placebo. Because the 4 injection/400 μg dose was not associated with higher abstinence rates, this result demonstrates that consideration of both dose and schedule of injection are critical. In an independent, follow-up immunogenicity study to examine if peak Ab levels could be elevated, a total of 74 subjects received 6 injections of 400 μg 3'AmNic-rEPA at week 0, 4, 8, 12, 16, and 26. More than 80% of subjects receiving the 6-dose immunization regimen exceeded a target level of Ab (25 μg/ml) by week 14. In contrast, only 50% of subjects receiving 5 injections of the 400 μg dose achieved this level by week 14 in the current study and, only 7% of subjects attained this level by the target quit date. This finding suggests that more frequent injections and a later quit date may increase treatment success. In general, 3'AmNic-rEPA was well-tolerated. The frequencies of local and systemic vaccine reactogenicity events were similar in the vaccine and placebo groups and similar to licensed adult vaccines containing Alum adjuvant [16]. The slight increase in cases of herpes zoster observed in this study may be spurious, but continued monitoring is necessary to determine if a causal relationship exists. The occurrence of a single anaphylactic reaction would suggest a need for continued monitoring and follow-up, even considering the subjects' history of prior drug allergy to penicillin.1 In summary, results from this study support the concept that high anti-nicotine Ab levels are associated with higher rates of abstinence. These findings suggest that vaccines attaining high Ab levels by the target quit date may be more effective. Other future strategies may include examining additional ways to increase Ab levels across all individuals. Nonetheless, this study demonstrates that 3'AmNic-rEPA has significant potential as a smoking cessation, and perhaps a relapse prevention aid.\n\nBODY.METHODS.STUDY POPULATION:\nSmokers were recruited via advertisement across nine geographically diverse U.S. sites. Interested subjects were screened over the telephone and more extensively screened at the first Screening Visit. Subject informed consent was obtained prior to screening. Subjects were 18 years of age or older, smoked ≥15 cigarettes/day, had exhaled CO ≥10 ppm; wanted to quit smoking, and were in good general physical and mental health. Exclusion criteria included recent use of any medications or drugs that might interfere with immune response, interact with the vaccine, and pharmacotherapies or other treatments for smoking cessation. For females, a negative urine pregnancy test at enrollment and active use of acceptable birth control or documentation of surgical sterility was required.\n\nBODY.METHODS.STUDY DESIGN:\nThis Phase 2 study was a randomized double-blind, placebo-controlled, parallel-arm trial design (see Figure 1). Four treatment groups varied the dose and/or schedule of intramuscular vaccination: 200 or 400 μg of 3'AmNic-rEPA or placebo according to Schedule 1 (weeks 0, 6, 12, and 26) or Schedule 2 (weeks 0, 4, 8, 16, and 26). Subjects (N=150) were first randomized within Schedule 1 groups in a 1:1:1 ratio (200 μg: 400 μg: placebo) and then 151 subjects randomized within Schedule 2 groups in the same ratio. The TQD was 1 week after the second injection (end of week 7 for Schedule 1 and end of week 5 for Schedule 2). 
If the subjects relapsed (seven consecutive days of smoking) after the quit date, a second quit date coinciding with a future clinic visit was allowed between time of relapse and week 18. Cessation counseling (based on the USDHHS Clinical Practice Guidelines [17]) for the first quit attempt involved 5 standardized face-to-face sessions (≤10 minutes) and for the second quit attempt, a face-to-face session plus 2 post-quit telephone counseling sessions. Subjects were followed for 52 weeks after randomization and the first injection on Day 0 for a total of 21 visits. During the injection day, subjects remained at the study site for 30 to 60 minutes for observation and attended a visit 24 hours after each injection to assess side effects. Otherwise, visits ranged from weekly to bi-weekly and were less frequent after later injections. Subjects who failed to quit smoking on their TQDs were encouraged to remain in the study and continue to attempt to achieve abstinence. Subjects who terminated from the study were not replaced and presumed to be smokers. Institutional Review Board approval was obtained from all institutions involved in the study. A Data and Safety Monitoring Board (DSMB) met four times during the study.\n\nBODY.METHODS.INVESTIGATIONAL PRODUCT:\nThe active investigational product was purified 3'-aminomethylnicotine conjugated to P. aeruginosa r-exoprotein A. For the 200 and 400 μg/mL dose, each single-use syringe contained 3'-aminomethylnicotine conjugated to 200 or 400 μg rEPA, respectively, adsorbed to 1.2 mg aluminum hydroxide adjuvant (Alhydrogel 85) in 1 mL phosphate buffered saline (0.15 M NaCl, 0.002 M NaPO4, pH 7.2, 0.01% polysorbate 80; PBS). For the placebo dose, PBS with 1.2 mg Alhydrogel 85 was included in a 1 mL single-use syringe.\n\nBODY.METHODS.MEASURES:\nCigarette use was recorded daily on an electronic diary for 182 days, and then weekly for the remainder of the study. Exhaled CO and urine cotinine were measured at each study visit, except for visits within 24 hours of vaccination. Questionnaires were collected via electronic diary: Fagerström Test for Nicotine Dependence [18] (administered days 0, 210, 364), Minnesota Nicotine Withdrawal Scale [MNWS, 19] (administered weekly until Month 6) and data on other tobacco usage. Sera were collected for immunogenicity measurements at 16–17 time points (schedule-dependent) from baseline to week 52. Anti-nicotine antibody concentrations were measured using ELISA (Enzyme-linked Immunosorbent Assay) [12]. Subjects recorded standard local and systemic reactogenicity events for 7 days after each injection. All reactogenicity events were followed until resolution or study completion. Treatment emergent AEs were recorded until 4 weeks after the last dose, with the exception of SAEs, which were collected through week 52. Subjects were also periodically monitored at clinic visits for vital signs, weight, hematology, chemistry and urinalysis.\n\nBODY.METHODS.STATISTICAL ANALYSIS:\nThe Intent-to-Treat (ITT) population was used for evaluation of efficacy, safety and immunogenicity. The ITT population was defined as all subjects who were randomized to treatment. The primary endpoint was continuous smoking abstinence for a total of 8 weeks measured from the beginning of week 19 to the end of week 26 (determined from subject diaries confirmed by exhaled CO ≤8 ppm). 
The analysis for proof-of-concept stratified the active-treatment recipients into high (top 30% AUC from weeks 0 to 26) and low (bottom 70% AUC from weeks 0 to 26) Ab responders, regardless of treatment group. An a priori decision was made to establish the antibody cutoff between 50% and 25%. Top 30% by AUC group was selected as the largest group of high antibody responders between the 25% and 50% cutoffs that demonstrated statistical significance as compared to placebo. Smoking outcomes were compared between subjects with high Ab and pooled placebo recipients using logistic regression. Secondary aims of this study were to assess a) 7-day point prevalence abstinence at various times, CARs during 52 weeks and time to sustained abstinence defined as attaining 8 weeks of continuous abstinence at anytime prior to week 46 and maintaining continuous abstinence through 52 weeks; b) impact on compensatory smoking amongst non-abstainers; c) withdrawal symptoms and d) immunogenicity, efficacy and safety of administration of either 4 or 5 of the 200 and 400 μg doses. Secondary smoking cessation analyses used logistic regression for binary outcomes and Cox proportional hazards regression models and log-rank tests for time-to-sustained abstinence analyses. Mixed-effects repeated-measures analyses of the number of cigarettes smoked, CO and cotinine adjusted for baseline were utilized in assessing compensatory smoking among non-abstainers or in assessing withdrawal symptoms. Anti-nicotine Ab responses were summarized as GMC with 95% confidence intervals. Safety was assessed throughout this study primarily using reactogenicity and adverse events (AEs). Reactogenicity data for 7 days after each injection were tabulated, and the proportions of subjects with any post-vaccination reactogenicity, aggregated over all injections, among the five treatment groups were compared using the Generalized Cochran-Mantel-Haenszel test. When a subject dropped out of the study, they were assigned presumed to be smokers. Otherwise, all missing diary data related to cigarette use were imputed utilizing the Last Observation Carried Forward principle. No imputation was carried out for CO level. The missing serology data were imputed by first defining a set of injection windows for each schedule. The missing serology was imputed by using the next available measured serology in its corresponding window; if the next value was not available the value of the nearest previous time point in that window was used. The AUC for anti-nicotine Ab was calculated based on imputed data. For Ab values one week after the target quit date, the value at week 9 was used for Schedule 1 as sera were not collected at week 8.\n\n**Question:** Compared to placebo what was the result of high Ab group on Prolonged abstinence from smoking rates to 12 months?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly increased\n"
}
] |
significantly increased
|
138
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Liraglutide, a once-daily human GLP-1 analogue, added to a sulphonylurea over 26 weeks produces greater improvements in glycaemic and weight control compared with adding rosiglitazone or placebo in subjects with Type 2 diabetes (LEAD-1 SU)\n\n ABSTRACT.AIM:\nTo compare the effects of combining liraglutide (0.6, 1.2 or 1.8 mg/day) or rosiglitazone 4 mg/day (all n ≥ 228) or placebo (n = 114) with glimepiride (2–4 mg/day) on glycaemic control, body weight and safety in Type 2 diabetes.\n\nABSTRACT.METHODS:\nIn total, 1041 adults (mean ± sd), age 56 ± 10 years, weight 82 ± 17 kg and glycated haemoglobin (HbA1c) 8.4 ± 1.0% at 116 sites in 21 countries were stratified based on previous oral glucose-lowering mono : combination therapies (30 : 70%) to participate in a five-arm, 26-week, double-dummy, randomized study.\n\nABSTRACT.RESULTS:\nLiraglutide (1.2 or 1.8 mg) produced greater reductions in HbA1c from baseline, (−1.1%, baseline 8.5%) compared with placebo (+0.2%, P < 0.0001, baseline 8.4%) or rosiglitazone (−0.4%, P < 0.0001, baseline 8.4%) when added to glimepiride. Liraglutide 0.6 mg was less effective (−0.6%, baseline 8.4%). Fasting plasma glucose decreased by week 2, with a 1.6 mmol/l decrease from baseline at week 26 with liraglutide 1.2 mg (baseline 9.8 mmol/l) or 1.8 mg (baseline 9.7 mmol/l) compared with a 0.9 mmol/l increase (placebo, P < 0.0001, baseline 9.5 mmol/l) or 1.0 mmol/l decrease (rosiglitazone, P < 0.006, baseline 9.9 mmol/l). Decreases in postprandial plasma glucose from baseline were greater with liraglutide 1.2 or 1.8 mg [−2.5 to −2.7 mmol/l (baseline 12.9 mmol/l for both)] compared with placebo (−0.4 mmol/l, P < 0.0001, baseline 12.7 mmol/l) or rosiglitazone (−1.8 mmol/l, P < 0.05, baseline 13.0 mmol/l). Changes in body weight with liraglutide 1.8 mg (−0.2 kg, baseline 83.0 kg), 1.2 mg (+0.3 kg, baseline 80.0 kg) or placebo (−0.1 kg, baseline 81.9 kg) were less than with rosiglitazone (+2.1 kg, P < 0.0001, baseline 80.6 kg). Main adverse events for all treatments were minor hypoglycaemia (< 10%), nausea (< 11%), vomiting (< 5%) and diarrhoea (< 8%).\n\nABSTRACT.CONCLUSIONS:\nLiraglutide added to glimepiride was well tolerated and provided improved glycaemic control and favourable weight profile.\n\nBODY.INTRODUCTION:\nMost drugs that target Type 2 diabetes (T2D) also cause weight gain or hypoglycaemia, or both, with the risk increasing with combination therapy. Glucagon-like peptide-1 (GLP-1)-based therapies stimulate insulin secretion and reduce glucagon secretion only during hyperglycaemia. GLP-1 also slows gastric emptying and reduces appetite [1]. Although American Diabetes Association (ADA)/European Association for the Study of Diabetes (EASD) guidelines recommend lifestyle and metformin as initial therapy for T2D [2], sulphonylureas are used widely, particularly when metformin or thiazolidinediones are not tolerated. 
Glycaemic control eventually deteriorates with sulphonylureas while hypoglycaemia and weight gain are common [3]. Incretin therapy improves glycaemic control with low hypoglycaemic risk, while delayed gastric emptying and reduced appetite can reduce weight [1,4]. Liraglutide is a once-daily human GLP-1 analogue with 97% linear amino-acid sequence homology to human GLP-1 [5] and half-life of 13 h after subcutaneous administration that produces 24-h blood glucose control [6]. Liraglutide monotherapy for 14 weeks reduced glycated haemoglobin (HbA1c) by 1.7% and fasting plasma glucose (FPG) by 3.4 mmol/l without causing hypoglycaemia, along with weight loss (∼3 kg) compared with placebo [7]. Improvements in pancreatic B-cell function [7–9] and blood pressure [7], along with decreased glucagon secretion [7,10], also occurred. As part of the phase 3 programme [the Liraglutide Effect and Action in Diabetes (LEAD) programme] with liraglutide in > 4000 subjects with T2D as monotherapy or in combination therapy, this 26-week trial examined liraglutide plus glimepiride compared with either placebo or rosiglitazone added to glimepiride on glycaemic control and body weight.\n\nBODY.SUBJECTS AND METHODS.STUDY PARTICIPANTS:\nInclusion criteria: T2D treated with oral glucose-lowering agents (OGLAs) for ≥ 3 months; 18–80 years of age; HbA1c 7.0–11.0% (previous OGLA monotherapy) or 7.0–10.0% (previous OGLA combination therapy); body mass index (BMI) ≤ 45.0 kg/m2. Exclusion criteria: used insulin within 3 months, impaired liver or renal function, uncontrolled hypertension (≥ 180/100 mmHg), cancer or used any drugs apart from OGLAs likely to affect glucose concentrations. Subjects provided written informed consent. The study was conducted in accordance with good clinical practice guidelines and approved by independent ethics committees.\n\nBODY.SUBJECTS AND METHODS.STUDY DESIGN:\nThe study was a 26-week, double-blind, double-dummy, randomized, active-control, five-armed parallel (116 sites in 21 countries, primarily Europe and Asia) trial enrolling 1041 subjects (1–37 subjects per centre), all receiving glimepiride (2–4 mg/day) in combination with (Fig. 1): FIGURE 1Overview of trial design and treatment arms. one of three liraglutide doses [0.6, 1.2 or 1.8 mg, injected subcutaneously (Novo Nordisk, Bagsvaerd, Denmark) and rosiglitazone placebo];liraglutide placebo and rosiglitazone placebo;liraglutide placebo and rosiglitazone 4 mg/day (rosiglitazone; AvandiaTM; GlaxoSmithKline, London, UK). The doses of rosiglitazone and glimepiride used were determined by the highest doses approved in all participating counties. After discontinuing previous OGLAs except glimepiride, separate 2-week titration and maintenance periods with glimepiride (open-label) preceded randomization (Fig. 1). Subjects were stratified according to previous treatment (monotherapy or combination therapy). After randomization, 2-week treatment titration and 24-week treatment (maintenance) phases (Fig. 1) were completed. Liraglutide was up-titrated weekly in 0.6-mg increments until allocated doses were reached. Glimepiride could be adjusted between 2 and 4 mg/day in case of hypoglycaemia or other adverse events (AEs), while other drug doses were fixed. Liraglutide (active and placebo) was supplied in 3-ml pre-filled pens with 31G needles (Novo Nordisk). Subjects were encouraged to inject liraglutide into the upper arm, thigh or abdomen at the same time each day. 
Rosiglitazone and glimepiride were taken in the morning or with the first meal.\n\nBODY.SUBJECTS AND METHODS.STUDY MEASUREMENTS.EFFICACY:\nThe primary endpoint was change from baseline HbA1c after 26 weeks of treatment. Secondary endpoints included: percentages of subjects reaching HbA1c (< 7.0%, ≤ 6.5%), FPG (5.0 to ≤ 7.2 mmol/l) and postprandial plasma glucose (PPG; 10.0 mmol/l) targets [11–13]; changes in body weight, FPG, mean PPG, indices of pancreatic B-cell function [pro-insulin : insulin ratio and homeostasis model assessment (HOMA)-B], HOMA-insulin resistance (HOMA-IR) and blood pressure (BP). HbA1c was measured centrally (MDS Pharma Services, King of Prussia, PA, USA) by high performance liquid chromatography while plasma glucose (PG) was self-measured using MediSense® glucose meters (Abbott Diagnostics Inc., Abbott Park, IL, USA). Insulin and C-peptide were measured by chemiluminescence, proinsulin by ELISA, while glucagon was measured in aprotinin-treated plasma by radioimmunoassay. The proinsulin : insulin ratio was calculated from fasting insulin and fasting proinsulin. HOMA-B and HOMA-IR were both calculated from FPG and fasting insulin. Samples measured centrally were collected and transported according to detailed procedures in the MDS Pharma Services manual. Samples stored at ambient temperature were shipped by courier to the central laboratory on the same day as collection, while frozen samples were shipped every 3 weeks.\n\nBODY.SUBJECTS AND METHODS.STUDY MEASUREMENTS.SAFETY:\nSafety variables included hypoglycaemic episodes based on PG levels (< 3.1 mmol/l), liraglutide antibodies including cross-reacting and neutralizing antibodies, tolerability (gastrointestinal complaints) and pulse. AEs, vital signs, electrocardiogram (ECG), biochemical and haematology measures including calcitonin were also monitored. Self-treated hypoglycaemic episodes were classified as minor, while those requiring third-party assistance were considered major. Serum antibodies against liraglutide were measured by radioimmunoprecipitation assay.\n\nBODY.SUBJECTS AND METHODS.STATISTICAL ANALYSES:\nAll efficacy and safety analyses were based on intent-to-treat criteria, defined as subjects who were exposed to ≥ 1 dose of trial product(s). Efficacy endpoints were analysed by ancova with treatment, country and previous glucose-lowering treatment as fixed effects and baseline values as covariates. Missing data were imputed by last observation carried forward (LOCF). Sample size calculations were based on predicted HbA1c and body weight after trial completion. As the three liraglutide + glimepiride groups were to be compared with both rosiglitazone + glimepiride and glimepiride monotherapy, two calculations were performed. These sample size calculations assumed a standard deviation of 1.2% of HbA1c, the non-inferiority/superiority margin vs. active control was set to 0.4% and the difference to detect (superiority vs. placebo) was set to 0.5%. For body weight, a coefficient of variation of 3% (based on phase 2a trials for liraglutide) and a difference to detect of 3% were assumed. A combined power (calculated as the product of the marginal powers for HbA1c and body weight) of at least 85% was required. These calculations indicated that at least 168 and 81 patients completing the study would be needed for the combination and glimepiride monotherapy groups, respectively. 
Assuming a drop-out rate of 25%, targets for randomization were 228 in each of the combination therapy groups and 114 in the placebo group (total n = 1026). To protect against Type 1 errors, HbA1c was analysed using hierarchical testing for descending doses of liraglutide. First, superiority of liraglutide 1.8 mg to placebo was tested and, only if superior to placebo, non-inferiority to rosiglitazone was tested. If non-inferiority was obtained, superiority to rosiglitazone for liraglutide 1.8 mg was tested and superiority to placebo for liraglutide 1.2 mg was tested. If superiority was confirmed, non-inferiority to rosiglitazone would be tested and so on (i.e. testing sequence was stopped when hypotheses could not be rejected). Superiority was concluded when upper limits of two-sided 95% confidence intervals (CIs) for treatment differences were below 0%; non-inferiority was concluded if these values were < 0.4%; for secondary endpoints, Type 1 errors were controlled by estimating simultaneous CIs using Dunnett's method. Proportions of subjects achieving HbA1c (HbA1c < 7.0%, and ≤ 6.5%) and FPG (5.0 ≤ FPG ≤ 7.2 mmol/l) targets [13] were compared between treatments using logistic regression with allocated treatment and baseline values as covariates. Chi-square analyses assessed differences in treatments for percentages of subjects achieving no, one, two or three PPG values < 10 mmol/l [13]. Hypoglycaemic episodes were analysed under the assumption that number per subject were negatively binomially distributed using a generalized linear model, including treatment and country as fixed effects. Other safety data were compared by descriptive statistics. Values for descriptive statistics are expressed as means ± sd, while ancova results are expressed as least square means ± SEM or with 95% CI unless otherwise noted. Significance levels were set to 5% for two-sided tests and 2.5% for one-sided tests.\n\nBODY.RESULTS.DISPOSITION AND DEMOGRAPHICS:\nThe treatment groups were well balanced (Table 1). Of 1712 subjects screened, 1041 were randomized and 1040 were exposed to trial drugs; 147 subjects (14.1%) withdrew (Fig. 2). Withdrawals were higher with placebo (27%) and rosiglitazone treatment (16%) compared with liraglutide 0.6 mg (11%), liraglutide 1.2 mg (14%) and liraglutide 1.8 mg (9%) treatment. Thirty-eight subjects (3.7%) withdrew as a result of AEs (Fig. 2). Table 1 Demographic characteristics of study participants Liraglutide 0.6 mg ( n = 233) Liraglutide 1.2 mg ( n = 228) Liraglutide 1.8 mg ( n = 234) Placebo ( n = 114) Rosiglitazone ( n = 232) Male : female (%) 54 : 46 45 : 55 53 : 47 47 : 53 47 : 53 Age (years) 55.7 ± 9.9 57.7 ± 9.0 55.6 ± 10.0 54.7 ± 10.0 56.0 ± 9.8 Duration of diabetes (years) 6.5 (4.0,10.2) 6.7 (4.0,10.7) 6.5 (3.7,10.5) 6.5 (4.5,10.6) 6.6 (4.3,10.7) Previous on mono : combi (%) 30 : 70 31 : 69 27 : 73 32 : 68 32 : 68 FPG (mmol/l) 10.0 ± 2.4 9.8 ± 2.7 9.7 ± 2.4 9.5 ± 2.0 9.9 ± 2.5 HbA 1c (%) 8.4 ± 1.0 8.5 ± 1.1 8.5 ± 0.9 8.4 ± 1.0 8.4 ± 1.0 Diabetic retinopathy (%) 17.2 14.9 12.0 13.2 16.4 Hypertension (%) 69.1 68.0 69.7 64.9 66.8 BMI (kg/m 2 ) 30.0 ± 5.0 29.8 ± 5.1 30.0 ± 5.1 30.3 ± 5.4 29.4 ± 4.8 Weight (kg) 82.6 ± 17.7 80.0 ± 17.1 83.0 ± 18.1 81.9 ± 17.1 80.6 ± 17.0 Systolic blood pressure (mmHg) 131 ± 16 133 ± 15 132 ± 16 131 ± 15.3 133 ± 15 Data are mean ± sd and percentages, except for duration of diabetes, where data are median, 25th and 75th percentile. 
BMI, body mass index; FPG, fasting plasma glucose; HbA 1c , glycated haemoglobin; mono : combi, previous treatment with either monotherapy or combination therapy; sd , standard deviation. FIGURE 2Flow of patients through the study.\n\nBODY.RESULTS.EFFICACY.HBA:\nHbA1c decreased rapidly with all doses of liraglutide when added to glimepiride compared with either rosiglitazone or placebo (i.e. glimepiride monotherapy), irrespective of previous therapy. The greatest decreases occurred with liraglutide 1.2 and 1.8 mg (Fig. 3a–c). After 26 weeks, HbA1c decreased by 1.1% from baseline (primary endpoint) with either liraglutide 1.2 or 1.8 mg, respectively, compared with either placebo (+0.2%) or rosiglitazone (−0.4%) (Fig. 3d). Estimated treatment differences and 95% CIs to placebo were: liraglutide 1.8 mg: −1.4% (−1.6, −1.1); liraglutide 1.2 mg: −1.3% (−1.5, −1.1); liraglutide 0.6 mg: −0.8% (−1.1, −0.6); rosiglitazone: −0.7% (−0.9, −0.4). All liraglutide doses were superior to placebo (P < 0.0001), while the two higher liraglutide doses were superior to rosiglitazone (P < 0.0001). Liraglutide 0.6 mg was non-inferior to rosiglitazone. Rosiglitazone also was superior to placebo (P < 0.0001). FIGURE 3Mean glycated haemoglobin (HbA1c) by treatment and week (intent-to-treat population with last observation carried forward): (a) overall population; (b) previously on monotherapy; or (c) previously on combination therapy; (d) mean changes in HbA1c from baseline after 26 weeks of treatment. Keys: (a–c) liraglutide 0.6 mg: grey dotted line with squares; liraglutide 1.2 mg: black solid line with triangles; liraglutide 1.8 mg: black dotted line with squares; rosiglitazone: grey solid line with circles; placebo: black solid line with circles. (d) liraglutide 0.6 mg: black stripes on white; liraglutide 1.2 mg: white stripes on black, liraglutide 1.8 mg: grey tint; rosiglitazone: white; placebo: black. ****P < 0.0001 compared with placebo; ††††P < 0.0001 compared with rosiglitazone. HbA1c decreases were greater for subjects who entered from monotherapy compared with combination therapy (Fig. 3d). However, because the increase with placebo was higher for individuals entering on combination therapy (0.7 vs. 0.23%), the differences between treatment groups in favour of liraglutide were similar irrespective of whether subjects were treated previously with monotherapy or combination therapy. Neither age, gender nor BMI affected these trends.\n\nBODY.RESULTS.EFFICACY.PERCENTAGE REACHING AN HBA:\nThe percentage of subjects reaching ADA [2] and International Diabetes Federation (IDF)/American Association of Clinical Endocrinologists (AACE) [11,12] treatment HbA1c goals with liraglutide was dose dependent (Fig. 4). At week 26, 42% and 21% of subjects treated with liraglutide 1.8 mg reached an HbA1c < 7.0% and ≤ 6.5%, respectively, compared with 8% and 4% for placebo (Fig. 4). The estimated proportion of subjects treated with either liraglutide 1.2 or 1.8 mg reaching ADA/EASD and IDF/AACE HbA1c targets was substantially greater compared with either placebo (P < 0.0001) or rosiglitazone (Fig. 4; P ≤ 0.0003), with more patients reaching < 7.0% with liraglutide 1.8 mg compared with 1.2 mg (P = 0.018). 
FIGURE 4Subjects achieving specified glycated haemoglobin (HbA1c) levels: (a) percentage reaching HbA1c < 7.0% (American Diabetes Association/European Association for the Study of Diabetes target); (b) percentage reaching HbA1c < 6.5% (International Diabetes Federation/American Association of Clinical Endocrinologists targets); (c) cumulative distribution of HbA1c at 26 weeks for the intent-to-treat (ITT) population; and (d) for the ITT last observation carried forward (LOCF) population. Keys: (a, b) liraglutide 0.6 mg: black stripes on white; liraglutide 1.2 mg: white stripes on black, liraglutide 1.8 mg: grey tint; rosiglitazone: white; placebo: black. (c, d) liraglutide 0.6 mg: pale grey solid line; liraglutide 1.2 mg: grey solid line, liraglutide 1.8 mg: black solid line; rosiglitazone: dotted black line; placebo: dotted grey line; baseline visit: long dashed black line. ****P < 0.0001 or **P < 0.01 compared with placebo; ††††P < 0.0001 or †††P = 0.0005 compared with rosiglitazone.\n\nBODY.RESULTS.EFFICACY.FASTING PLASMA GLUCOSE:\nBy week 2, subjects treated with liraglutide had rapid and larger decreases in FPG vs. comparator treatment. At week 26, all doses of liraglutide decreased FPG more than did placebo (Fig. 5; P < 0.0001), while only liraglutide 1.2 or 1.8 mg produced greater reductions than rosiglitazone. FPG treatment differences to placebo were 1.7 mmol/l for liraglutide 0.6 mg and 2.6 mmol/l for both liraglutide 1.2 and 1.8 mg. An 0.7-mmol/l greater reduction in FPG was achieved with either liraglutide 1.2 or 1.8 mg compared with rosiglitazone (P ≤ 0.006) after 26 weeks. FIGURE 5Mean changes from baseline in fasting plasma glucose after 26 weeks of treatment. ****P < 0.0001 compared with placebo; ††P < 0.01 compared with rosiglitazone. The percentage of subjects achieving FPG values between 5.0 mmol/l and ≤ 7.2 mmol/l (ADA target) after 26 weeks was higher with liraglutide: 0.6 mg (19%; P = 0.002); 1.2 mg (37%; P < 0.001); and 1.8 mg (38%;P < 0.001) compared with placebo (7%). The liraglutide 1.2 and 1.8 mg treatment groups also had more subjects achieving the same FPG target at end of treatment compared with rosiglitazone (26%) (P = 0.007 and P = 0.01, respectively).\n\nBODY.RESULTS.EFFICACY.POSTPRANDIAL PLASMA GLUCOSE:\nPPG was reduced similarly after each meal. The greatest reductions in mean PPG values from baseline (average of values obtained 90 min after breakfast, lunch and evening meal) occurred with liraglutide 1.2 mg (2.5 mmol/l) and liraglutide 1.8 mg (2.7 mmol/l). By comparison, the reduction from baseline in mean PPG values was 1.8 mmol/l for rosiglitazone and liraglutide 0.6 mg and 0.4 mmol/l for placebo. Treatment differences for PPG were greater with all doses of liraglutide compared with placebo (1.5–2.4 mmol/l; P < 0.0001) and greater with liraglutide 1.2 mg (0.64 mmol/l; P = 0.043) and 1.8 mg (0.87 mmol/l;P = 0.0022) compared with rosiglitazone.\n\nBODY.RESULTS.EFFICACY.PPG MEASUREMENTS < 10.0 MMOL/L:\nThe percentage of subjects with one, two or three PPG measurements < 10.0 mmol/l (ADA target) were greater for all doses of liraglutide compared with placebo (P < 0.05) but not rosiglitazone.\n\nBODY.RESULTS.BODY WEIGHT:\nMean weight at baseline was 81.6 kg. Mean reductions in weight from baseline to end of treatment were 0.2 kg with liraglutide 1.8 mg and 0.1 kg with placebo treatment, while increases occurred with either liraglutide 0.6 mg (0.7 kg), liraglutide 1.2 mg (0.3 kg) or rosiglitazone (2.1 kg) (Fig. 6). 
Unlike rosiglitazone, weight did not increase substantially with liraglutide and the differences between rosiglitazone and liraglutide were statistically significant (−2.3 to −1.4 kg; P < 0.0001), although there were no significant differences compared with placebo. Gender appeared to have no influence on the results, as indicated when added as a fixed effect in the ancova model. FIGURE 6Mean changes in body weight from baseline after 26 weeks of treatment. *P < 0.05 compared with placebo; ††††P < 0.0001 compared with rosiglitazone.\n\nBODY.RESULTS.INDICES OF PANCREATIC B-CELL FUNCTION AND INSULIN RESISTANCE:\nReductions in the proinsulin : insulin ratio were greater with both liraglutide 1.2 and 1.8 mg compared with either rosiglitazone or placebo (Table 2; P ≤ 0.02). HOMA-B increased with liraglutide (1.8 or 1.2 mg) compared with rosiglitazone (P < 0.05), while this increase was only different to placebo with liraglutide 1.2 mg (P = 0.01) and not liraglutide 1.8 mg (P = 0.051). There were no significant differences between treatments for HOMA-IR. Table 2 Selected indices of pancreatic B-cell function Variable Treatment Baseline Week 26 (LOCF) Least square difference from placebo (95% CI) Least square difference from rosiglitazone (95% CI) Proinsulin : insulin ratio Liraglutide 0.6 mg 0.42 ± 0.22 0.38 ± 0.24 −0.05 (−0.11; 0.00) −0.02 (−0.06; 0.03) Liraglutide 1.2 mg 0.45 ± 0.31 0.33 ± 0.20 −0.10 (−0.16; −0.05) † −0.07 (−0.11; −0.02) * Liraglutide 1.8 mg 0.48 ± 0.33 0.36 ± 0.20 −0.09 (−0.15; −0.03) * −0.05 (−0.10; −0.01) * Placebo 0.44 ± 0.27 0.46 ± 0.29 Rosiglitazone 0.45 ± 0.29 0.40 ± 0.20 HOMA-B (%) Liraglutide 0.6 mg 51 ± 43.3 70 ± 88.6 15 (−19.10; 49.0) 11 (−16.7; 39.0) Liraglutide 1.2 mg 71 ± 254.3 99 ± 184.3 43 (8.10; 76.9) * 39 (10.3; 67.0) * Liraglutide 1.8 mg 56 ± 84.6 91 ± 108.2 34 (−0.23; 68.5) 30 (2.00; 58.6) * Placebo 56 ± 103.3 52 ± 107.3 Rosiglitazone 46 ± 36.2 59 ± 63.3 * P ≤ 0.05; † P < 0.0001. CI, confidence interval; HOMA, homeostatis model assessment; LOCF, last observation carried forward. \n\nBODY.RESULTS.BLOOD PRESSURE AND PULSE:\nAlthough decreases in systolic blood pressure occurred with either liraglutide 1.2 or 1.8 mg (2.6–2.8 mmHg), they were not significantly different from placebo or rosiglitazone (0.9–2.3 mmHg). Reductions in diastolic blood pressure also occurred with all treatments (0.7–1.4 mmHg), with no significant differences between treatments. Pulse increases above baseline ranged from 2 to 4 beats/min with the three doses of liraglutide and 1 beat/min with rosiglitazone, while pulse decreased by 1 beat/min with placebo. Changes in pulse for all doses of liraglutide were significant vs. placebo (P ≤ 0.002). This also was true with either liraglutide 1.8 or 1.2 mg compared with rosiglitazone (P < 0.01).\n\nBODY.RESULTS.SAFETY:\nThe most common treatment-emergent AEs that were considered by investigators to be either possibly or probably related to liraglutide were gastrointestinal (diarrhoea, nausea, dyspepsia and constipation) and nervous system disorders (headache and dizziness), particularly during the first 4 weeks. Nausea was highest with liraglutide 1.2 mg (10.5%) and lowest with placebo (1.8%). Vomiting (4.4%) and diarrhoea (7.9%) were also higher with liraglutide 1.2 mg. Withdrawals because of nausea ranged from 0.9–2.2%, vomiting 0.4–0.9% and diarrhoea 0–1.3%. Nausea was more common with liraglutide compared with placebo and rosiglitazone, particularly during the first 4 weeks (Fig. 7). 
Frequency of nausea was less in the liraglutide 0.6 mg treatment group compared with the higher doses of liraglutide. Generally, the occurrence of nausea dissipated from 4 to 26 weeks of treatment in all groups using liraglutide (Fig. 7). FIGURE 7Percentage of subjects experiencing nausea over the course of the study. Key: liraglutide 0.6 mg with glimepiride: black line with filled circles; liraglutide 1.2 mg with glimepiride: black line with filled triangles; liraglutide 1.8 mg with glimepiride: grey line with hollow circles; glimepiride grey lines with filled squares; rosiglitazone and glimepiride: grey line with hollow triangles. The incidence of serious AEs ranged between 3 and 5%: placebo (3%), rosiglitazone (3%), liraglutide 0.6 mg (3%), liraglutide 1.2 mg (4%) and liraglutide 1.8 mg (5%). Most treatment-emergent serious AEs were judged by investigators to be unlikely to be related to trial products. No deaths were reported during the trial. One subject developed chronic pancreatitis whilst taking liraglutide 0.6 mg; the person had no reported previous history of pancreatitis. The subject continued on liraglutide therapy and completed the trial. At screening, five patients had been previously diagnosed with pancreatitis. As pancreatitis was not an exclusion criterion, these patients were randomized as follows: one to liraglutide 0.6 mg, one to liraglutide 1.2 mg, two to liraglutide 1.8 mg and one to rosiglitazone + glimepiride. All five patients completed the trial without reporting pancreatitis as an adverse event. Hypoglycaemia was infrequent with all treatments. One major hypoglycaemic episode (self-measured blood glucose = 3.0 mmol/l) occurred 9 days after treatment started in a subject receiving liraglutide 1.8 mg in combination with glimepiride. Although medical assistance was not needed, the subject required third-party assistance. The investigator judged the episode as likely to be related to glimepiride and reduced the dose from 4 to 3 mg after the incident. Minor hypoglycaemia occurred in < 10% of subjects for any treatment. The proportion of subjects experiencing minor hypoglycaemia during the trial was lowest with placebo (i.e. glimepiride monotherapy 2.6%; 0.17 events/subject-year), comparable with liraglutide 0.6 mg (5.2%, 0.17 events/subject-year) and rosiglitazone (4.3%, 0.12 events/subject-year) groups and similar between the liraglutide 1.2 mg (9.2%, 0.51 events/subject-year) and liraglutide 1.8 mg (8.1%, 0.47 events/subject-year) treatment groups. Incidence was higher with liraglutide 1.2 mg (P = 0.0024) and 1.8 mg (P = 0.0065) compared with rosiglitazone and liraglutide 1.2 mg compared with placebo (P = 0.048), occurring in the setting of lower mean HbA1c values. Antibodies to liraglutide were found in 9–13% of subjects treated with liraglutide. No significant effects of these antibodies on HbA1c were found in pooled analyses of four trials including the current study. There were no clinically relevant changes in ophthalmoscopy, biochemistry, urinalysis, haematology or ECG assessments. No significant differences in calcitonin were found between the three groups treated with liraglutide when compared with either placebo or rosiglitazone at the end of the trial at week 26.\n\nBODY.DISCUSSION:\nTreatment with liraglutide plus glimepiride was superior to glimepiride monotherapy at all doses of liraglutide and superior to rosiglitazone plus glimepiride for the two higher liraglutide doses for improving HbA1c. 
Similar findings for reductions in FPG and PPG highlight improved 24-h glucose control with once-daily liraglutide, with substantially more subjects reaching glycaemic targets, particularly with liraglutide 1.8 mg. Improvements in pancreatic B-cell function were larger with liraglutide 1.2 and 1.8 mg compared with rosiglitazone. Liraglutide was well tolerated and occurrence of gastrointestinal AEs was low overall, particularly after week 4. Although rates of hypoglycaemia were low in all treatment groups (< 10%), minor hypoglycaemic events occurred more often in patients treated with glimepiride plus liraglutide 1.2 or 1.8 mg than with glimepiride alone. It should be noted, however, that patients treated with liraglutide 1.2 or 1.8 mg achieved a lower HbA1c than those receiving glimepiride monotherapy. At lower HbA1c levels, sulphonylureas are known to elicit hypoglycaemia more readily than at higher levels. In clinical practice it may be possible to reduce the dose of sulphonylurea (when used with liraglutide) to minimize risk of hypoglycaemia and maintain HbA1cimprovements. Although weight effects were modest, liraglutide produced more favourable weight effects compared with rosiglitazone, which produced substantial weight gain. In other studies with liraglutide, subjects adding a 1.8-mg dose to metformin lost 2.8 kg [14], while those adding both metformin and glimepiride lost 1.8 kg compared with placebo [15] (both over 26 weeks) and those on liraglutide monotherapy (1.8 mg) lost 2.45 kg over 52 weeks [16]. In our study, because sulphonylureas usually cause weight gain, inclusion or optimization of glimepiride but not metformin may have mitigated the weight benefits typically associated with liraglutide. Lack of weight effects could be secondary to lower baseline body weight, withdrawal of previous metformin treatment or defensive snacking to minimize risk of hypoglycaemia. It might have been expected that the greater weight gain with rosiglitazone compared with liraglutide 1.8 mg would be associated with a concurrent increase in insulin resistance with rosiglitazone. The absence of this effect could reflect the insulin-sensitizing nature of rosiglitazone. Improvements in pancreatic B-cell function associated with liraglutide are consistent with other studies [7–9]. Study strengths include inclusion of both placebo and active (rosiglitazone) comparators and that OGLAs were optimized (not maximized) before randomization to minimize risk of hypoglycaemia. Limitations of the study include short duration of the trial and restriction on glimepiride and rosiglitazone in some countries that precluded maximal dosing. The impact of using other GLP-1-based treatments [such as exenatide, or the dipeptidyl peptidase-4 (DPP-4) inhibitor, sitagliptin] with sulphonylureas in subjects with T2D has been studied. In a 30-week American trial where exenatide twice a day was added to sulphonylureas, HbA1c was reduced by 0.46% from baseline with 5 μg and 0.86% with 10 μg [17] compared with 1.1% with liraglutide 1.8 or 1.2 mg. This reduction in HbA1c with liraglutide is consistent with other LEAD trials investigating liraglutide as monotherapy or in combination with various OGLA drugs. In these trials, HbA1c was reduced by 1–1.5%[14,16,18–20]. Reductions in FPG with exenatide were 0.3 and 0.6 mmol/l from baseline with 5 μg and 10 μg, respectively, compared with 1.4 mmol/l with liraglutide 1.8 mg; weight loss of 1.6 kg occurred with exenatide 10 μg compared with 0.2 kg for liraglutide 1.8 mg [17]. 
Differences in weight effects may be as a result of lower baseline weight in this trial (82 kg) compared with exenatide (96 kg) and discontinuation of previous metformin therapy, unlike the exenatide trial where exenatide was added to previous sulphonylurea monotherapy [17]. Other large-scale trials with liraglutide in combination with sulphonylureas have demonstrated weight loss of 2–3 kg [18,20]. Withdrawals from exenatide trials ranged from 24–30% compared with 9–14% with liraglutide in this study. Nausea with exenatide ranged from 39% with 5 μg to 51% with 10 μg [17] compared with 10.5% for liraglutide. Furthermore, 41% were positive for anti-exenatide antibodies compared with 9–13% with anti-liraglutide antibodies. With sitagliptin 100 mg once daily for 24 weeks, HbA1c decreased by 0.3% from baseline in subjects receiving glimepiride, with 11% achieving an HbA1c < 7.0%[21]. Reductions in FPG and PPG from baseline were 0.05 and 1.4 mmol/l, respectively, while weight increased by 0.8 kg and the prevalence of nausea was < 1%. Although head-to-head trials are required to test true differences between these agents, the marked effects of liraglutide on FPG may be as a result of consistent blood levels of liraglutide maintained over 24 h compared with exenatide which has to be administered 60 min before breakfast and dinner and has a half-life of 1.5–3.6 h [22]. In a recent 26-week head-to-head trial comparing liraglutide with exenatide, liraglutide produced a 0.3% greater decrease on HbA1c (P < 0.0001) [20]. Because DPP-4 inhibitors inhibit the degradation of GLP-1, the efficacy of sitagliptin is dependent on levels of endogenous GLP-1 which is physiologically low compared with the much higher pharmacological levels of liraglutide. Pharmacological levels may be needed to induce satiety, weight loss and possibly larger HbA1c reductions. Liraglutide is an effective and well-tolerated once-daily human GLP-1 analogue that improves overall glycaemic control and indices of pancreatic B-cell function with minimal weight gain and risk of hypoglycaemia when used in combination with a sulphonylurea for T2D.\n\nBODY.COMPETING INTERESTS:\nThe study was funded by Novo Nordisk, the manufacturer of liraglutide. In collaboration with the investigators, Novo Nordisk was responsible for the study design, protocol, statistical analysis plans, oversight, analysis and reporting of the results. Data were recorded at the clinical centres and maintained by the sponsor. The LEAD-1 SU study group had full access to the data. Final responsibility for the decision to submit the manuscript for publication was the authors. MM has received lecture fees from Novo Nordisk, Servier, MSD; JS has received honoraria, grants and lecture fees from Novo Nordisk; MB, WMWB and NAK have no conflicts to declare; JS has received lecture fees from Novo Nordisk; MZ is employed by, and holds stock in, Novo Nordisk; TLT is employed by Novo Nordisk; SC is a member of the international advisory board on liraglutide for Novo Nordisk and has received lecture fees from Novo Nordisk.\n\n**Question:** Compared to Placebo plus glimepiride what was the result of Liraglutide (1.8 mg) plus glimepiride on HOMA-B?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** no significant difference\n"
}
] |
no significant difference
|
199
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Is the operative delivery rate in low-risk women dependent on the level of birth care? A randomised controlled trial\n\n ABSTRACT.OBJECTIVE:\nTo investigate possible differences in operative delivery rate among low-risk women, randomised to an alongside midwifery-led unit or to standard obstetric units within the same hospital.\n\nABSTRACT.DESIGN:\nRandomised controlled trial.\n\nABSTRACT.SETTING:\nDepartment of Obstetrics and Gynaecology, Østfold Hospital Trust, Tromsø, Norway.\n\nABSTRACT.POPULATION:\nA total of 1111 women assessed to be at low risk at onset of spontaneous labour.\n\nABSTRACT.METHODS:\nRandomisation into one of three birth units: the special unit; the normal unit; or the midwife-led unit.\n\nABSTRACT.MAIN OUTCOME MEASURES:\nTotal operative delivery rate, augmentation, pain relief, postpartum haemorrhage, sphincter injuries and intrapartum transfer, Apgar score <7 at 5 minutes, metabolic acidosis and transfer to neonatal intensive care unit.\n\nABSTRACT.RESULTS:\nThere were no significant differences in total operative deliveries between the three units: 16.3% in the midwife-led unit; 18.0% in the normal unit; and 18.8% in the special unit. There were no significant differences in postpartum haemorrhage, sphincter injuries or in neonatal outcomes. There were statistically significant differences in augmentation (midwife-led unit versus normal unit RR 0.73, 95% CI 0.59–0.89; midwife-led unit versus special unit RR 0.69, 95% CI 0.56–0.86), in epidural analgesia (midwife-led unit versus normal unit RR 0.68, 95% CI 0.52–0.90; midwife-led unit versus special unit RR 0.64, 95% CI 0.47–0.86) and in acupuncture (midwife-led unit versus normal unit RR 1.45, 95% CI 1.25–1.69; midwife-led unit versus special unit RR 1.45, 95% CI 1.22–1.73).\n\nABSTRACT.CONCLUSIONS:\nThe level of birth care does not significantly affect the rate of operative deliveries in low-risk women without any expressed preference for level of birth care.\n\nBODY.INTRODUCTION:\nOver the last few decades there has been an increasing trend towards the centralisation of childbirth in larger clinics in developed countries. As the level of available obstetric technology increases, the use of this technology increases as well, leaving researchers to suggest that low-risk women may receive excess interventions.1–4 Intervention rates for low-risk births might be higher than necessary, and there are large variations in inter-unit comparisons.5 In their intrapartum care guidelines, the UK's National Institute for Health and Clinical Excellence concluded that if a low-risk woman plans to give birth in a midwife-led unit she will have a higher likelihood of a normal birth with less intervention.6 As a counterbalance towards the trend of increased perinatal intervention, low-risk birth units or birth centres have been established. Low-risk birth units can either be freestanding, i.e. 
localised away from a hospital, or sit alongside, i.e. integrated within a hospital. These units are most often midwife led. Transfer from a low-risk birth unit to a standard care birth unit or hospital is required if medical services are necessary. Freestanding birth centres have been studied in different settings, concluding that birth centres are a safe alternative to hospital for low-risk women.7 It is also shown that general practitioners and midwives can identify a low-risk population that can deliver safely at maternity homes, with a low rate of operative deliveries and transfers.8 Freestanding, midwife-led birth centres report higher rates of normal births and lower rates of caesarean sections and episiotomies.9 Alongside birth units have also been studied widely. The Stockholm birth centre trial concludes that birth centre care is associated with less medical interventions, without statistically significant differences in health outcome.10 A Cochrane review on the topic concludes that an alternative birth setting versus conventional institutional birth setting is associated with reduced rates of medical interventions and increased maternal satisfaction, but states that there might be an increased risk for perinatal mortality.11 According to Gottvall et al.,12 there is no statistically significant difference in perinatal mortality between birth centres and standard care. A systematic review on low-risk units concludes that birth centres can offer the possibility of accessible, appropriate and personal maternity care for women and their families, but points to a strong need for randomised trials.13 Hatem et al.14 conclude in a Cochrane review that women who had midwife-led care were less likely to experience operative delivery, with no statistically significant differences in fetal or neonatal death overall. Studies reporting results from low-risk units often include participants early in pregnancy.10,15–18 This implies that a certain number of women included do not fulfil the selection criteria for midwife-led units at onset of labour, and therefore do not attend these units at all in labour. Following the important principle of intention to treat, the participants are still analysed according to the group they were originally allocated to.Waldenstrøm and Nilsson10 state that among women randomised to the midwife-led unit, 34% were transferred antepartum and 16% were transferred intrapartum. In this trial we wanted to study the effect of birth unit on birth outcome for low-risk women, and inclusion was therefore conducted at the onset of spontaneous labour. When searching for similar trials including women at onset of labour conducted in the last 20 years, only two randomised controlled trials were found: one from the USA and one from Hong Kong.19,20 Earlier data from several standard care obstetric departments in Norway show an operative delivery rate (caesareans, vacuum extractions and forceps deliveries) amongst low-risk women of ≥10%.21 At freestanding midwife-led units the operative delivery rate for the same group is approximately 5%.22 The aim of the present randomised controlled trial was to investigate if there were differences in operative delivery rates in low-risk women giving birth in an alongside, midwifery-led unit, compared with obstetric units. 
We hypothesised that it was possible to reduce the need for operative deliveries, with the same or better results for mother and child, if low-risk women were delivered in a separate low-risk unit.\n\nBODY.METHODS:\nIn 1999 the Norwegian Parliament decided to organise national birth care into three levels. Departments of obstetric and gynaecology with more than 1500 births per year, providing all birth care services with obstetricians, paediatricians and anaesthesiologists on duty at all times, and with a neonatal intensive care unit.Smaller obstetrical departments with 400–1500 births per year, providing low-risk birth care with obstetricians and anaesthesiologist on call.Midwife-led maternity homes with 40–400 births per year, providing birth care for healthy women with expected normal births. The Norwegian Parliament also advised obstetric departments to have low-risk units within hospitals.23 Therefore, the Department of Obstetrics and Gynaecology at Østfold Hospital Trust, with approximately 3000 births per year, was divided into three separate units, placed on separate floors, in 2004: The midwife-led unit (MU), the normal unit (NU) and the special unit (SU). The MU is organised for low-risk women with expected normal births who want as little intervention as possible. Restrictive selection criteria must be fulfilled to attend this unit. No epidural is offered nor augmentation, unless required for the second phase of the second stage. If extended surveillance is needed or if the birth needs to be taken over by an obstetrician, the woman will be transferred to either the NU or the SU. Obstetricians are not present at the unit unless called on for a specific reason. The NU is organised for women with expected normal births. The unit has access to extended surveillance, epidural and operative vaginal delivery. It also provides room for women with elective caesareans and inductions after spontaneous rupture of membranes. If extended surveillance is necessary throughout the birth at the NU, a transfer to the SU is not required. The SU is organised for women who are in need of extended surveillance in the antenatal period, during labour and after birth. Women expecting normal births may give birth at any of the three units, but at the MU only low-risk women are accepted. The MU has approximately 600 births annually, and the other two units have approximately 1200 births each. Each unit has its own separate staff, and midwives are responsible for all normal deliveries. All units provide both birth and postpartum care. To explore our hypothesis a randomised trial was carried out. The primary outcome was operative delivery rates. Secondary outcomes were: augmentation of labour, pain relief, and postpartum haemorrhage, and neonatal outcomes measured by an Apgar score <7 at 5 minutes, metabolic acidosis defined as an umbilical artery pH <7.05 and BE (Base Excess) <−12 mmol/l,24 and transfers to the neonatal intensive care unit (NICU). Information about the trial was sent to all women planning to give birth at Østfold Hospital Trust when being called for a routine ultrasound examination. At the routine ultrasound examination at 18–20 weeks of gestation, all women roughly suited for the trial received additional written and verbal information about the trial. If eligible for the trial and willing to participate, she was recruited for the trial. If she fulfilled the inclusion criteria at the onset of spontaneous labour, she was randomised to one of the three units. 
The inclusion criteria for this study were similar to the selection criteria at the MU. Healthy, low-risk women without any disease known to influence the pregnancy, one fetus in cephalic presentation, a pre-pregnant body mass index (BMI) ≤32, not smoking more than ten cigarettes per day, no prior operation on the uterus, no prior complicated deliveries, and spontaneous onset of labour between 36+1 and 41+6 weeks of gestation. Written informed consent was obtained from all study participants. All 10 902 women who gave birth at the Østfold Hospital Trust during the study period were given written information on the trial when invited to ultrasound screening at 18 weeks of gestation. As the trial includes only healthy women, a certain number were excluded according to the inclusion criteria. Of the 2884 possible candidates assessed as being both eligible and willing to participate, 1773 did not meet the inclusion criteria by the time of onset of spontaneous labour for the following reasons: no longer considered to be at low risk because of pre-eclampsia, placenta praevia, intrauterine growth retardation, breech presentation, haemorrhage in third trimester, and pre- and post-term pregnancies and inductions (n = 697); changed their minds about participating (n = 300); the study was paused during summer and Christmas vacations because the MU was closed (n = 254); and for other reasons (n = 522). This led to a number of 1111 participants (Figure 1). Figure 1Flowchart of recruiting and inclusion process. The randomisation process was performed through a digital randomisation database developed by the Clinical Research Unit at the University Hospital of North Norway. The midwife who administered the randomisation entered the women's name and checked for eligibility before receiving the randomisation number and unit from the database. Allocation was concealed and the randomisation stratified between primiparous (para 0) and multiparous (para 1+) women (Table 1). As the SU serves women with extended needs, their capacity to receive low-risk women is limited. Because of this, randomisation was pre-specified to allocate 37.5, 37.5 and 25.0% to the NU, MU and SU, respectively. Table 1 Some basic characteristics of the participants Variable Midwife-led unit n = 282 (%) Normal unit n = 417 (%) Special unit n = 412 (%) Parity Nulliparous(P0) 278 (67.5) 285 (68.3) 184 (65.2) Multiparous(P+) 134 (32.5) 132 (31.7) 98 (35.4) Education Primary school 20 (4.9) 25 (6.0) 23 (8.2) High school 182 (44.2) 168 (40.3 112 (39.7) College/university 202 (49.0) 218 (52.3) 139 (49.3) Unknown 8 (1.9) 6 (1.4) 8 (2.8) Age <25 years 103 (25.0) 100 (24.0) 64 (22.7) 25–35 years 263 (63.8) 270 (64.7) 181 (64.2) >35 years 46 (11.2) 47 (11.3) 37 (13.1) Social status Married 155 (37.6) 165 (39.6) 120 (42.6) Cohabiting 236 (57.3) 229 (54.9) 152 (53.9) Single 19 (4.6) 20 (4.8) 9 (3.2) Unknown 2 (0.5) 3 (0.7) 1 (0.4) \n\nBODY.METHODS.DOCUMENTATION PROCESS:\nAll data were registered by the midwife in charge in the electronic journal system of the department, partus (Clinsoft®), as is routine for all births. A midwife at each unit monitored the entries and was responsible for the documentation in connection with the trial. 
As a third and last documentation control, all the participants' data were checked by a midwife not working at any of the three units.\n\nBODY.METHODS.STATISTICAL ANALYSIS:\nTo detect a statistically significant reduction in operative delivery rate for low-risk women, from an estimated >10% in standard care units to approximately 5%, which is closer to the estimated rate in freestanding birth units, a power calculation was conducted. With a power of 80% and a probability of P < 0.05, one would have to include 1642 low-risk women. The inclusion process proceeded slower than expected, and unfortunately the funding was running out. Hence the inclusion stopped the first week of March 2010, including just 1111 participants in the trial. All data were analysed by the principle of 'intention to treat'. Analyses presenting differences between the three units were performed by chi-squared tests, and Pearson's two-sided asymptomatic significance level P values were calculated. The MU was set as the reference unit, and all primary and secondary outcomes of this unit were compared with the outcomes of the NU and SU. The statistician who performed the statistical analysis was blinded to the participants' affiliation to the groups. Each result is presented with a risk ratio (RR) and a 95% confidence interval (95% CI). The analysis was conducted in statistical product and service solutions (SPSS) 17.\n\nBODY.RESULTS:\nOf the 1111 participants in this trial, 67.2% were primiparous and 32.8% were multiparous. Table 1 shows the baseline characteristics of the participants.\n\nBODY.RESULTS.MODE OF DELIVERY:\nThere was no statistically significant difference in mode of delivery between the three birth-care units (Table 2). At the MU, the total operative delivery rate was 16.3%, with 23.4% for primiparas. At the NU these figures were 18.0 and 25.6%, respectively, and at the SU the rates were 18.8 and 27.7% (Table 3). Table 2 Relative risk (RR) assessments, with the MU set as the reference Variable MU vs NU RR (95% CI) MU vs SU RR (95% CI) Operative delivery 0.90 (0.67–1.22) 0.87 (0.62–1.20) Operative vaginal delivery 0.85 (0.58–1.25) 0.98 (0.65–1.52) caesarean section 1.01 (0.58–1.75) 0.71 (0.41–1.24) Dystocia * 0.79 (0.65–0.96) 0.72 (0.59–0.89) Oxytocin augmentation 0.71 (0.58–0.87) 0.69 (0.55–0.86) Epidural 0.68 (0.51–0.90) 0.64 (0.47–0.86) N 2 O 0.99 (0.90–1.09) 0.92 (0.83–1.02) Acupuncture for pain relief 1.45 (1.25–1.69) 1.45 (1.22–1.73) Postpartum haemorrhage >1000 0.79 (0.30–2.09 0.59 (0.20–1.41) Episiotomy of all vaginal 0.85 (0.66–1.09) 0.78 (0.60–1.02) Third-or fourth-degree tear of all vaginal deliveries 0.56 (0.19–1.66) 0.67 (0.20–2.28) Apgar score <7 at 5 minutes 0.68 (0.19–2.37) 2.74 (0.31–24.37) Metabolic acidocis ** 0.78 (0.25–2.42) 1.10 (0.30–4.0) Transfers to NICU *** 1.25 (0.76–2.05) 1.15 (0.67–1.99) * Midwife or doctor recorded labour dystocia, according to the hospital criteria. ** Metabolic acidosis: sample taken from umbilical cord showing arterial pH <7.05 and BE <−12 mmol/l. *** Transfer of newborn to NICU within the first 2 hours postpartum. 
Table 3 Birth outcome within the first 2 hours postpartum at all three birth care units Variable MU n = 412 (%) NU n = 417 (%) SU n = 282 (%) P Mode of delivery Total number of spontaneous deliveries 345 (84.0) 342 (82.0) 229 (81.0) ns Total number of operative deliveries 67 (16.0) 75 (18.0) 53 (18.8) ns Number of operative deliveries (P0) 65/278 (23.4) 73/285 (25.6) 51/184 (27.7) ns Number of operative deliveries (P+) 2/134 (1.5) 2/132 (1.5) 2/98 (2.0) ns Indication for operative delivery Labour dystocia 39 (58.2) 31 (41.3) 32 (60.4) ns Fetal distress 19 (28.4) 26 (34.7) 13 (24.5) ns Total number of operative vaginal deliveries 43 (10.0) 51 (12.0) 30 (11.0) ns Number of operative vaginal deliveries (P0) 42/278 (15.1) 49/285 (17.2) 29/184 (15.8) ns Number of operative vaginal deliveries (P+) 1/134 (0.7) 2/132 (0.7) 1/98 (1.0) ns Indication for operative vaginal delivery Labour dystocia 26 (60.5) 23 (45.0) 21 (70.0) ns Fetal distress 14 (32.6) 20 (39.2) 9 (30.0) ns Total number of caesarean sections 24 (6.0) 24 (6.0) 23 (8.0) ns Number of caesarean sections (P0) 23/278 (8.3) 24/285 (8.4) 22/184 (12.0) ns Number of caesarean sections (P+) 1/134 (0.7) 0/132 (0.0) 1/98 (1.0) ns Indication for caesarean section Labour dystocia 13 (54.2) 8 (33.3) 11 (47.8) ns Fetal distress 5 (20.8) 6 (25.0) 4 (17.4) ns Labour * 120 (29.0) 154 (37.0) 114 (40.0) <0.01 Oxytocin augmentation 108 (26.2) 153 (36.7) 107 (38.0) <0.01 Epidural 65 (16.0) 97 (23.0) 70 (25.0) <0.01 N 2 O 270 (66.0) 275 (66.0) 201 (71.0) ns Acupuncture for pain relief 227 (55.0) 158 (38.0) 107 (38.0) <0.001 Postpartum haemhorrage >1000 ml 7 (1.7) 9 (2.2) 9 (3.2) ns 500–999 ml 33 (8.0) 38 (9.0) 36 (13.0) ns 1000–1500 ml 4 (1.0) 6 (1.0) 3 (3.0) ns >1500 ml 3 (1.0) 3 (1.0) 6 (2.0) ns Episiotomy, of all vaginal deliveries 88/388 (23.0) 105/393 (27.0) 75/259 (29.0) ns Third- or fourth-degree tear, all vaginal deliveries 5 (1.0) 9 (2.0) 5 (2.0) ns Intrapartum transfer ** 117 (28.0) Apgar score <7 at 5 minutes 4 (1.0) 6 (1.0) 1 (0.5) ns Metabolic acidocis *** 5 (2.0) 7 (3.0) 4 (2.0) ns Transfers to NICU **** 32 (8.0) 26 (6.0) 19 (7.0) ns * Midwife or doctor recorded labour dystocia, according to the hospitals criteria. ** Intrapartum transfer from the MU. *** Metabolic acidosis: sample from umbilical cord showing arterial pH <7.05 and BE (Base Excess) <−12 mmol/l. **** Transfer of newborn to NICU within the first 2 hours postpartum. ns, not significant; P0, primiparous; P+, multiparous. Of all 24 women delivered by caesarean section at the MU, 23 were primiparous. The main reason for the intervention was dystocia (54.2%). At the NU there were 24 caesarean sections, all of them among nulliparous women. Dystocia was the reason for 33.3% of the operations at this unit. At the SU, one multiparous and 22 nulliparous women were delivered by caesarean section, and the main reason given for this intervention was dystocia (47.8%) (Table 3). Regarding operative vaginal delivery at the MU, 42 out of 43 deliveries were for primiparous women, and the main indication was dystocia. Of the women having an operative vaginal delivery at the NU, only two out of the 49 were multiparous, and the main reason for intervention was dystocia. 
At the SU, one of the 30 women who had an operative vaginal delivery was multiparous, and dystocia was the main indication for the interventions (Table 3).\n\nBODY.RESULTS.MODE OF DELIVERY.PERINEAL OUTCOME:\nThere was no significant difference between the three groups concerning the number of episiotomies or the incidence of sphincter injuries (Table 2). An episiotomy was performed on 22.7, 26.7 and 29.0% of the women in the MU, NU and SU, respectively (Table 3). A sphincter injury occurred in 1.3, 2.3 and 1.9% of the vaginal deliveries at the MU, NU and SU, respectively (Table 3). Of the five women with a sphincter injury at the MU, none had an episiotomy, one had an operative vaginal delivery and four had spontaneous delivery. Of the nine women with sphincter injuries at the NU, four had both an episiotomy and operative vaginal delivery, one had spontaneous delivery with episiotomy and four had spontaneous delivery with no episiotomy. At the SU, five women had sphincter injuries: three had an operative vaginal delivery and episiotomy, one had an episiotomy and spontaneous delivery, and one had spontaneous delivery and no episiotomy.\n\nBODY.RESULTS.MODE OF DELIVERY.LABOUR DYSTOCIA:\nLabour dystocia was evaluated and recorded by the midwives or the doctors. According to the hospital guidelines, dystocia is defined as progression of <1 cm dilatation of the cervix per hour in the active phase of the first stage (defined as 3–4 cm dilatation of the cervix and regular contractions until a cervix dilatation of 10 cm). Dystocia in the second stage is recorded if the expulsion phase lasts more than 60 minutes for both nulliparous and multiparous women. Dystocia in the second stage is also recorded if the second stage lasts longer than 2 hours for nulliparous women without epidural or multiparous women with epidural, or more than 3 hours for nulliparous women with epidural or more than 60 minutes for multiparous women without epidural. In the MU dystocia was recorded in 29.1% of the cases, which is a significantly lower rate than 36.9% in the NU (RR 0.79, 95% CI 0.65–0.96) and 40.4% in the SU (RR 0.72, 95% CI 0.59–0.89) (Table 2). Of all women allocated to the MU, 26.2% were given oxytocin infusion for augmentation of labour, which was significantly lower than 36.7% in the NU (RR 0.73, 95% CI 0.59–0.89) and 38.0% in the SU (RR 0.69, 95% CI 0.56–0.86) (Table 2). Labour dystocia was the main reason for all operative deliveries (Table 3). The mean time for the active phase of the first stage was 4.9, 4.6 and 4.8 hours in the MU, NU and SU, respectively. The mean time for the expulsion phase of all vaginal deliveries was 40.4, 39.2 and 40.1 minutes in the MU, NU and SU, respectively.\n\nBODY.RESULTS.MODE OF DELIVERY.PAIN RELIEF:\nOf all women randomised to the MU, 15.8% had an epidural, which is a significantly lower rate than 23.3% in the NU (RR 1.47, 95% CI 1.11–1.96) and 24.8% in the SU (RR 1.57, 95% CI 1.16–2.13). The women randomised to the MU had acupuncture in 55.1% of the cases, a significantly higher rate compared with those randomised to the NU (37.9%; RR 1.45, 95% CI 1.25–1.69) and SU (37.9%; RR 1.45, 95% CI 1.22–1.73) (Table 2).\n\nBODY.RESULTS.MODE OF DELIVERY.HAEMORRHAGE:\nThere was no statistically significant difference in the rate of postpartum haemorrhage between the three units (Table 2). The vast majority of all participants had normal postpartum haemorrhage of 500 ml or less (MU 90.3%, NU 88.7% and SU 84.0%; Table 3). 
Of the 25 women with a haemorrhage of 1000 ml or more, 17 were caused by atonic postpartum haemorrhage (five operative vaginal deliveries, 11 spontaneous deliveries and one caesarean section), three were caused by a retained placenta (two operative vaginal deliveries and one spontaneous delivery) and for five women no indication was stated.\n\nBODY.RESULTS.MODE OF DELIVERY.NEONATAL OUTCOMES:\nNeonatal outcomes were evaluated by Apgar score <7 at 5 minutes, metabolic acidosis and transfer to NICU within 2 hours of birth. An Apgar score <7 at 5 minutes was observed in 1.0, 1.4 and 0.4% of cases in the MU, NU and SU, respectively. An umbilical cord pH test was taken in 57.7, 68.8 and 77.3% of the cases in the MU, NU and SU, respectively. Metabolic acidosis was stated in 2.2, 2.8 and 2.0% in the MU, NU and SU, respectively. Transfers to the NICU were conducted in 7.8, 6.2 and 6.7% of the cases in the MU, NU and SU, respectively (Table 3). None of these outcomes showed a statistically significant difference between the units (Table 2).\n\nBODY.RESULTS.MODE OF DELIVERY.INTRAPARTUM TRANSFER:\nOf the 412 women randomised to the MU, 117 (28.4%) were transferred intrapartum to a higher level of care, either to the NU or SU (Table 3). The reasons for transfer were need for pain relief (39.3%), stained amniotic fluid (18.8%), fetal distress (9.4%), labour dystocia (23.9%) and other reasons (8.5%). Mean dilatation of the cervix was 6.4 cm at the time of transfer; 51% were transferred with a cervix dilatation of <7 cm. Of all women transferred intrapartum, 61 (52.0%) had an operative delivery, and among these, 39.3% were delivered by caesarean section and 60.7% were delivered by operative vaginal delivery. Of those transferred for labour dystocia, 60.8% had an operative delivery and 39.2% had a spontaneous delivery.\n\nBODY.DISCUSSION:\nIn this randomised controlled trial three birth units within the same hospital were compared concerning birth care for low-risk women. The results when including 1111 participants showed no statistically significant differences in the total operative delivery rate, nor did it show differences in postpartum haemorrhage or neonatal outcomes between the three units. Operative delivery rate is often used as a measure of the quality of birth care,7–11,14,16,19,20,22,25,26 but it is a subtle way of measuring quality, as it predicts poor quality if the operative delivery rate is low but gives a negative outcome, or if the rate is high without improving the outcome or even increasing the complications for the mother or newborn, yet it predicts good quality if performed when needed. Finding the right level or percentage of operative delivery will always be a challenge. Operative delivery rates differ between countries and institutions. Data from the Medical Birth Registry of Norway (MBRN) from 2008 show an average total operative delivery rate for low-risk women of approximately 13.8%, but varies between institutions. It is worth noting that the birth population in Norway 2006–2009 consisted of 42.2% primiparous and 57.9% multiparous women.21 During the study period the Department of Obstetrics and Gynaecology at Østfold Hospital Trust had a high overall operative delivery rate (29.2%) compared with most hospitals in Norway.21 This fact is reflected in the high numbers of caesarean deliveries in this study in all three birth units. 
The overall high risk of having an operative delivery for low-risk nulliparas with no expressed preferences for level of birth care, leave birth attendants with the challenge of focusing on low-risk primiparous women, guiding them safely through their first labour. Statistically significant differences were found for dystocia and augmentation of labour by oxytocin, and the use of epidural and acupuncture as pain relief. Moen et al.27 studied augmentation for all low-risk women in a retrospective study conducted at a large hospital in Norway. They found that low-risk women were given oxytocin in 39% of the cases (62% of the primiparous and 24% of the multiparous women), many of them without any documented indication. In a debate article in the BMJ in 2002, the authors state that medical interventions have become routine in normal childbirth, without evidence of effectiveness.4 This view is supported by others.2 A strength of this trial is the time for randomisation when comparing intrapartum birth care and birth outcome in low-risk birth units and standard care units. All participants were defined as low risk when entering the trial at onset of spontaneous labour, making sure that only those fulfilling the selection criteria were included. As far as we know no similar trial has been conducted in Europe during the last two decades. A possible limitation of this trial is the fact that the number of women included was less than estimated by the power calculations, based on the primary outcome: operative delivery. This also might be the reason for the wide confidence interval for the primary outcome. However, the differences between the three units were so small that even if the total number of participants were included, it is considered unlikely that the differences would be significant. Only small non-significant differences in total operative delivery rate were found (total operative delivery rate, MU versus SU P= 0.57 and MU versus SU P= 0.44). There is a challenge in recruiting participants to studies like this because of the fact that women today often have their own preference for place of birth.28 This fact led to a longer recruiting period than expected in this trial.\n\nBODY.CONCLUSION:\nThe operative delivery rate, the risk of having a postpartum haemorrhage of more than 1000 ml and the outcome for the newborn were not affected by the level of care for low-risk women without prelabour preferences for level of care. The participants randomised to the MU had a significantly higher chance of giving birth without interventions like augmentation by oxytocin or epidural analgesia. Further research is needed to determine the influence of women's own preference for birth care unit.\n\n**Question:** Compared to Normal unit & Special unit what was the result of Midwife-led unit on Neonatal outcomes were evaluated by Apgar score <7 at 5 minutes, metabolic acidosis and transfer to NICU within 2 hours of birth?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** no significant difference\n"
}
] |
no significant difference
|
119
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Liraglutide, a once-daily human GLP-1 analogue, added to a sulphonylurea over 26 weeks produces greater improvements in glycaemic and weight control compared with adding rosiglitazone or placebo in subjects with Type 2 diabetes (LEAD-1 SU)\n\n ABSTRACT.AIM:\nTo compare the effects of combining liraglutide (0.6, 1.2 or 1.8 mg/day) or rosiglitazone 4 mg/day (all n ≥ 228) or placebo (n = 114) with glimepiride (2–4 mg/day) on glycaemic control, body weight and safety in Type 2 diabetes.\n\nABSTRACT.METHODS:\nIn total, 1041 adults (mean ± sd), age 56 ± 10 years, weight 82 ± 17 kg and glycated haemoglobin (HbA1c) 8.4 ± 1.0% at 116 sites in 21 countries were stratified based on previous oral glucose-lowering mono : combination therapies (30 : 70%) to participate in a five-arm, 26-week, double-dummy, randomized study.\n\nABSTRACT.RESULTS:\nLiraglutide (1.2 or 1.8 mg) produced greater reductions in HbA1c from baseline, (−1.1%, baseline 8.5%) compared with placebo (+0.2%, P < 0.0001, baseline 8.4%) or rosiglitazone (−0.4%, P < 0.0001, baseline 8.4%) when added to glimepiride. Liraglutide 0.6 mg was less effective (−0.6%, baseline 8.4%). Fasting plasma glucose decreased by week 2, with a 1.6 mmol/l decrease from baseline at week 26 with liraglutide 1.2 mg (baseline 9.8 mmol/l) or 1.8 mg (baseline 9.7 mmol/l) compared with a 0.9 mmol/l increase (placebo, P < 0.0001, baseline 9.5 mmol/l) or 1.0 mmol/l decrease (rosiglitazone, P < 0.006, baseline 9.9 mmol/l). Decreases in postprandial plasma glucose from baseline were greater with liraglutide 1.2 or 1.8 mg [−2.5 to −2.7 mmol/l (baseline 12.9 mmol/l for both)] compared with placebo (−0.4 mmol/l, P < 0.0001, baseline 12.7 mmol/l) or rosiglitazone (−1.8 mmol/l, P < 0.05, baseline 13.0 mmol/l). Changes in body weight with liraglutide 1.8 mg (−0.2 kg, baseline 83.0 kg), 1.2 mg (+0.3 kg, baseline 80.0 kg) or placebo (−0.1 kg, baseline 81.9 kg) were less than with rosiglitazone (+2.1 kg, P < 0.0001, baseline 80.6 kg). Main adverse events for all treatments were minor hypoglycaemia (< 10%), nausea (< 11%), vomiting (< 5%) and diarrhoea (< 8%).\n\nABSTRACT.CONCLUSIONS:\nLiraglutide added to glimepiride was well tolerated and provided improved glycaemic control and favourable weight profile.\n\nBODY.INTRODUCTION:\nMost drugs that target Type 2 diabetes (T2D) also cause weight gain or hypoglycaemia, or both, with the risk increasing with combination therapy. Glucagon-like peptide-1 (GLP-1)-based therapies stimulate insulin secretion and reduce glucagon secretion only during hyperglycaemia. GLP-1 also slows gastric emptying and reduces appetite [1]. Although American Diabetes Association (ADA)/European Association for the Study of Diabetes (EASD) guidelines recommend lifestyle and metformin as initial therapy for T2D [2], sulphonylureas are used widely, particularly when metformin or thiazolidinediones are not tolerated. 
Glycaemic control eventually deteriorates with sulphonylureas while hypoglycaemia and weight gain are common [3]. Incretin therapy improves glycaemic control with low hypoglycaemic risk, while delayed gastric emptying and reduced appetite can reduce weight [1,4]. Liraglutide is a once-daily human GLP-1 analogue with 97% linear amino-acid sequence homology to human GLP-1 [5] and half-life of 13 h after subcutaneous administration that produces 24-h blood glucose control [6]. Liraglutide monotherapy for 14 weeks reduced glycated haemoglobin (HbA1c) by 1.7% and fasting plasma glucose (FPG) by 3.4 mmol/l without causing hypoglycaemia, along with weight loss (∼3 kg) compared with placebo [7]. Improvements in pancreatic B-cell function [7–9] and blood pressure [7], along with decreased glucagon secretion [7,10], also occurred. As part of the phase 3 programme [the Liraglutide Effect and Action in Diabetes (LEAD) programme] with liraglutide in > 4000 subjects with T2D as monotherapy or in combination therapy, this 26-week trial examined liraglutide plus glimepiride compared with either placebo or rosiglitazone added to glimepiride on glycaemic control and body weight.\n\nBODY.SUBJECTS AND METHODS.STUDY PARTICIPANTS:\nInclusion criteria: T2D treated with oral glucose-lowering agents (OGLAs) for ≥ 3 months; 18–80 years of age; HbA1c 7.0–11.0% (previous OGLA monotherapy) or 7.0–10.0% (previous OGLA combination therapy); body mass index (BMI) ≤ 45.0 kg/m2. Exclusion criteria: used insulin within 3 months, impaired liver or renal function, uncontrolled hypertension (≥ 180/100 mmHg), cancer or used any drugs apart from OGLAs likely to affect glucose concentrations. Subjects provided written informed consent. The study was conducted in accordance with good clinical practice guidelines and approved by independent ethics committees.\n\nBODY.SUBJECTS AND METHODS.STUDY DESIGN:\nThe study was a 26-week, double-blind, double-dummy, randomized, active-control, five-armed parallel (116 sites in 21 countries, primarily Europe and Asia) trial enrolling 1041 subjects (1–37 subjects per centre), all receiving glimepiride (2–4 mg/day) in combination with (Fig. 1): FIGURE 1Overview of trial design and treatment arms. one of three liraglutide doses [0.6, 1.2 or 1.8 mg, injected subcutaneously (Novo Nordisk, Bagsvaerd, Denmark) and rosiglitazone placebo];liraglutide placebo and rosiglitazone placebo;liraglutide placebo and rosiglitazone 4 mg/day (rosiglitazone; AvandiaTM; GlaxoSmithKline, London, UK). The doses of rosiglitazone and glimepiride used were determined by the highest doses approved in all participating counties. After discontinuing previous OGLAs except glimepiride, separate 2-week titration and maintenance periods with glimepiride (open-label) preceded randomization (Fig. 1). Subjects were stratified according to previous treatment (monotherapy or combination therapy). After randomization, 2-week treatment titration and 24-week treatment (maintenance) phases (Fig. 1) were completed. Liraglutide was up-titrated weekly in 0.6-mg increments until allocated doses were reached. Glimepiride could be adjusted between 2 and 4 mg/day in case of hypoglycaemia or other adverse events (AEs), while other drug doses were fixed. Liraglutide (active and placebo) was supplied in 3-ml pre-filled pens with 31G needles (Novo Nordisk). Subjects were encouraged to inject liraglutide into the upper arm, thigh or abdomen at the same time each day. 
Rosiglitazone and glimepiride were taken in the morning or with the first meal.\n\nBODY.SUBJECTS AND METHODS.STUDY MEASUREMENTS.EFFICACY:\nThe primary endpoint was change from baseline HbA1c after 26 weeks of treatment. Secondary endpoints included: percentages of subjects reaching HbA1c (< 7.0%, ≤ 6.5%), FPG (5.0 to ≤ 7.2 mmol/l) and postprandial plasma glucose (PPG; 10.0 mmol/l) targets [11–13]; changes in body weight, FPG, mean PPG, indices of pancreatic B-cell function [pro-insulin : insulin ratio and homeostasis model assessment (HOMA)-B], HOMA-insulin resistance (HOMA-IR) and blood pressure (BP). HbA1c was measured centrally (MDS Pharma Services, King of Prussia, PA, USA) by high performance liquid chromatography while plasma glucose (PG) was self-measured using MediSense® glucose meters (Abbott Diagnostics Inc., Abbott Park, IL, USA). Insulin and C-peptide were measured by chemiluminescence, proinsulin by ELISA, while glucagon was measured in aprotinin-treated plasma by radioimmunoassay. The proinsulin : insulin ratio was calculated from fasting insulin and fasting proinsulin. HOMA-B and HOMA-IR were both calculated from FPG and fasting insulin. Samples measured centrally were collected and transported according to detailed procedures in the MDS Pharma Services manual. Samples stored at ambient temperature were shipped by courier to the central laboratory on the same day as collection, while frozen samples were shipped every 3 weeks.\n\nBODY.SUBJECTS AND METHODS.STUDY MEASUREMENTS.SAFETY:\nSafety variables included hypoglycaemic episodes based on PG levels (< 3.1 mmol/l), liraglutide antibodies including cross-reacting and neutralizing antibodies, tolerability (gastrointestinal complaints) and pulse. AEs, vital signs, electrocardiogram (ECG), biochemical and haematology measures including calcitonin were also monitored. Self-treated hypoglycaemic episodes were classified as minor, while those requiring third-party assistance were considered major. Serum antibodies against liraglutide were measured by radioimmunoprecipitation assay.\n\nBODY.SUBJECTS AND METHODS.STATISTICAL ANALYSES:\nAll efficacy and safety analyses were based on intent-to-treat criteria, defined as subjects who were exposed to ≥ 1 dose of trial product(s). Efficacy endpoints were analysed by ancova with treatment, country and previous glucose-lowering treatment as fixed effects and baseline values as covariates. Missing data were imputed by last observation carried forward (LOCF). Sample size calculations were based on predicted HbA1c and body weight after trial completion. As the three liraglutide + glimepiride groups were to be compared with both rosiglitazone + glimepiride and glimepiride monotherapy, two calculations were performed. These sample size calculations assumed a standard deviation of 1.2% of HbA1c, the non-inferiority/superiority margin vs. active control was set to 0.4% and the difference to detect (superiority vs. placebo) was set to 0.5%. For body weight, a coefficient of variation of 3% (based on phase 2a trials for liraglutide) and a difference to detect of 3% were assumed. A combined power (calculated as the product of the marginal powers for HbA1c and body weight) of at least 85% was required. These calculations indicated that at least 168 and 81 patients completing the study would be needed for the combination and glimepiride monotherapy groups, respectively. 
Assuming a drop-out rate of 25%, targets for randomization were 228 in each of the combination therapy groups and 114 in the placebo group (total n = 1026). To protect against Type 1 errors, HbA1c was analysed using hierarchical testing for descending doses of liraglutide. First, superiority of liraglutide 1.8 mg to placebo was tested and, only if superior to placebo, non-inferiority to rosiglitazone was tested. If non-inferiority was obtained, superiority to rosiglitazone for liraglutide 1.8 mg was tested and superiority to placebo for liraglutide 1.2 mg was tested. If superiority was confirmed, non-inferiority to rosiglitazone would be tested and so on (i.e. testing sequence was stopped when hypotheses could not be rejected). Superiority was concluded when upper limits of two-sided 95% confidence intervals (CIs) for treatment differences were below 0%; non-inferiority was concluded if these values were < 0.4%; for secondary endpoints, Type 1 errors were controlled by estimating simultaneous CIs using Dunnett's method. Proportions of subjects achieving HbA1c (HbA1c < 7.0%, and ≤ 6.5%) and FPG (5.0 ≤ FPG ≤ 7.2 mmol/l) targets [13] were compared between treatments using logistic regression with allocated treatment and baseline values as covariates. Chi-square analyses assessed differences in treatments for percentages of subjects achieving no, one, two or three PPG values < 10 mmol/l [13]. Hypoglycaemic episodes were analysed under the assumption that number per subject were negatively binomially distributed using a generalized linear model, including treatment and country as fixed effects. Other safety data were compared by descriptive statistics. Values for descriptive statistics are expressed as means ± sd, while ancova results are expressed as least square means ± SEM or with 95% CI unless otherwise noted. Significance levels were set to 5% for two-sided tests and 2.5% for one-sided tests.\n\nBODY.RESULTS.DISPOSITION AND DEMOGRAPHICS:\nThe treatment groups were well balanced (Table 1). Of 1712 subjects screened, 1041 were randomized and 1040 were exposed to trial drugs; 147 subjects (14.1%) withdrew (Fig. 2). Withdrawals were higher with placebo (27%) and rosiglitazone treatment (16%) compared with liraglutide 0.6 mg (11%), liraglutide 1.2 mg (14%) and liraglutide 1.8 mg (9%) treatment. Thirty-eight subjects (3.7%) withdrew as a result of AEs (Fig. 2). Table 1 Demographic characteristics of study participants Liraglutide 0.6 mg ( n = 233) Liraglutide 1.2 mg ( n = 228) Liraglutide 1.8 mg ( n = 234) Placebo ( n = 114) Rosiglitazone ( n = 232) Male : female (%) 54 : 46 45 : 55 53 : 47 47 : 53 47 : 53 Age (years) 55.7 ± 9.9 57.7 ± 9.0 55.6 ± 10.0 54.7 ± 10.0 56.0 ± 9.8 Duration of diabetes (years) 6.5 (4.0,10.2) 6.7 (4.0,10.7) 6.5 (3.7,10.5) 6.5 (4.5,10.6) 6.6 (4.3,10.7) Previous on mono : combi (%) 30 : 70 31 : 69 27 : 73 32 : 68 32 : 68 FPG (mmol/l) 10.0 ± 2.4 9.8 ± 2.7 9.7 ± 2.4 9.5 ± 2.0 9.9 ± 2.5 HbA 1c (%) 8.4 ± 1.0 8.5 ± 1.1 8.5 ± 0.9 8.4 ± 1.0 8.4 ± 1.0 Diabetic retinopathy (%) 17.2 14.9 12.0 13.2 16.4 Hypertension (%) 69.1 68.0 69.7 64.9 66.8 BMI (kg/m 2 ) 30.0 ± 5.0 29.8 ± 5.1 30.0 ± 5.1 30.3 ± 5.4 29.4 ± 4.8 Weight (kg) 82.6 ± 17.7 80.0 ± 17.1 83.0 ± 18.1 81.9 ± 17.1 80.6 ± 17.0 Systolic blood pressure (mmHg) 131 ± 16 133 ± 15 132 ± 16 131 ± 15.3 133 ± 15 Data are mean ± sd and percentages, except for duration of diabetes, where data are median, 25th and 75th percentile. 
BMI, body mass index; FPG, fasting plasma glucose; HbA 1c , glycated haemoglobin; mono : combi, previous treatment with either monotherapy or combination therapy; sd , standard deviation. FIGURE 2Flow of patients through the study.\n\nBODY.RESULTS.EFFICACY.HBA:\nHbA1c decreased rapidly with all doses of liraglutide when added to glimepiride compared with either rosiglitazone or placebo (i.e. glimepiride monotherapy), irrespective of previous therapy. The greatest decreases occurred with liraglutide 1.2 and 1.8 mg (Fig. 3a–c). After 26 weeks, HbA1c decreased by 1.1% from baseline (primary endpoint) with either liraglutide 1.2 or 1.8 mg, respectively, compared with either placebo (+0.2%) or rosiglitazone (−0.4%) (Fig. 3d). Estimated treatment differences and 95% CIs to placebo were: liraglutide 1.8 mg: −1.4% (1.6, −1.1); liraglutide 1.2 mg: −1.3% (1.5, −1.1); liraglutide 0.6 mg: −0.8% (−1.1, −0.6); rosiglitazone: −0.7% (−0.9, −0.4). All liraglutide doses were superior to placebo (P < 0.0001), while the two higher liraglutide doses were superior to rosiglitazone (P < 0.0001). Liraglutide 0.6 mg was non-inferior to rosiglitazone. Rosiglitazone also was superior to placebo (P < 0.0001). FIGURE 3Mean glycated haemoglobin (HbA1c) by treatment and week (intent-to-treat population with last observation carried forward): (a) overall population; (b) previously on monotherapy; or (c) previously on combination therapy; (d) mean changes in HbA1c from baseline after 26 weeks of treatment. Keys: (a–c) liraglutide 0.6 mg: grey dotted line with squares; liraglutide 1.2 mg: black solid line with triangles; liraglutide 1.8 mg: black dotted line with squares; rosiglitazone: grey solid line with circles; placebo: black solid line with circles. (d) liraglutide 0.6 mg: black stripes on white; liraglutide 1.2 mg: white stripes on black, liraglutide 1.8 mg: grey tint; rosiglitazone: white; placebo: black. ****P < 0.0001 compared with placebo; ††††P < 0.0001 compared with rosiglitazone. HbA1c decreases were greater for subjects who entered from monotherapy compared with combination therapy (Fig. 3d). However, because the increase with placebo was higher for individuals entering on combination therapy (0.7 vs. 0.23%), the differences between treatment groups in favour of liraglutide were similar irrespective of whether subjects were treated previously with monotherapy or combination therapy. Neither age, gender nor BMI affected these trends.\n\nBODY.RESULTS.EFFICACY.PERCENTAGE REACHING AN HBA:\nThe percentage of subjects reaching ADA [2] and International Diabetes Federation (IDF)/American Association of Clinical Endocrinologists (AACE) [11,12] treatment HbA1c goals with liraglutide was dose dependent (Fig. 4). At week 26, 42% and 21% of subjects treated with liraglutide 1.8 mg reached an HbA1c < 7.0% and ≤ 6.5%, respectively, compared with 8% and 4% for placebo (Fig. 4). The estimated proportion of subjects treated with either liraglutide 1.2 or 1.8 mg reaching ADA/EASD and IDF/AACE HbA1c targets was substantially greater compared with either placebo (P < 0.0001) or rosiglitazone (Fig. 4; P ≤ 0.0003), with more patients reaching < 7.0% with liraglutide 1.8 mg compared with 1.2 mg (P = 0.018). 
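(An illustrative aside, not part of the original report: the methods state that the proportions of subjects reaching each HbA1c target were compared by logistic regression with allocated treatment and baseline value as covariates. The sketch below shows what such a model might look like in Python/statsmodels under that description; the simulated subjects, arm labels, group size of 80 per arm and response rates are all invented for the example and are not the trial data.)

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
# Invented target-attainment probabilities, for illustration only
arms = {"placebo": 0.08, "rosiglitazone": 0.22, "lira_1.2": 0.35, "lira_1.8": 0.42}
rows = [
    {
        "treatment": arm,
        "baseline_hba1c": rng.normal(8.4, 1.0),
        "reached_target": rng.binomial(1, p),  # 1 = HbA1c < 7.0% at week 26
    }
    for arm, p in arms.items()
    for _ in range(80)  # invented group size
]
df = pd.DataFrame(rows)

fit = smf.logit(
    "reached_target ~ C(treatment, Treatment('placebo')) + baseline_hba1c", data=df
).fit(disp=0)
print(np.exp(fit.params))  # odds ratios for each arm versus the placebo reference
```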
FIGURE 4Subjects achieving specified glycated haemoglobin (HbA1c) levels: (a) percentage reaching HbA1c < 7.0% (American Diabetes Association/European Association for the Study of Diabetes target); (b) percentage reaching HbA1c < 6.5% (International Diabetes Federation/American Association of Clinical Endocrinologists targets); (c) cumulative distribution of HbA1c at 26 weeks for the intent-to-treat (ITT) population; and (d) for the ITT last observation carried forward (LOCF) population. Keys: (a, b) liraglutide 0.6 mg: black stripes on white; liraglutide 1.2 mg: white stripes on black, liraglutide 1.8 mg: grey tint; rosiglitazone: white; placebo: black. (c, d) liraglutide 0.6 mg: pale grey solid line; liraglutide 1.2 mg: grey solid line, liraglutide 1.8 mg: black solid line; rosiglitazone: dotted black line; placebo: dotted grey line; baseline visit: long dashed black line. ****P < 0.0001 or **P < 0.01 compared with placebo; ††††P < 0.0001 or †††P = 0.0005 compared with rosiglitazone.\n\nBODY.RESULTS.EFFICACY.FASTING PLASMA GLUCOSE:\nBy week 2, subjects treated with liraglutide had rapid and larger decreases in FPG vs. comparator treatment. At week 26, all doses of liraglutide decreased FPG more than did placebo (Fig. 5; P < 0.0001), while only liraglutide 1.2 or 1.8 mg produced greater reductions than rosiglitazone. FPG treatment differences to placebo were 1.7 mmol/l for liraglutide 0.6 mg and 2.6 mmol/l for both liraglutide 1.2 and 1.8 mg. An 0.7-mmol/l greater reduction in FPG was achieved with either liraglutide 1.2 or 1.8 mg compared with rosiglitazone (P ≤ 0.006) after 26 weeks. FIGURE 5Mean changes from baseline in fasting plasma glucose after 26 weeks of treatment. ****P < 0.0001 compared with placebo; ††P < 0.01 compared with rosiglitazone. The percentage of subjects achieving FPG values between 5.0 mmol/l and ≤ 7.2 mmol/l (ADA target) after 26 weeks was higher with liraglutide: 0.6 mg (19%; P = 0.002); 1.2 mg (37%; P < 0.001); and 1.8 mg (38%;P < 0.001) compared with placebo (7%). The liraglutide 1.2 and 1.8 mg treatment groups also had more subjects achieving the same FPG target at end of treatment compared with rosiglitazone (26%) (P = 0.007 and P = 0.01, respectively).\n\nBODY.RESULTS.EFFICACY.POSTPRANDIAL PLASMA GLUCOSE:\nPPG was reduced similarly after each meal. The greatest reductions in mean PPG values from baseline (average of values obtained 90 min after breakfast, lunch and evening meal) occurred with liraglutide 1.2 mg (2.5 mmol/l) and liraglutide 1.8 mg (2.7 mmol/l). By comparison, the reduction from baseline in mean PPG values was 1.8 mmol/l for rosiglitazone and liraglutide 0.6 mg and 0.4 mmol/l for placebo. Treatment differences for PPG were greater with all doses of liraglutide compared with placebo (1.5–2.4 mmol/l; P < 0.0001) and greater with liraglutide 1.2 mg (0.64 mmol/l; P = 0.043) and 1.8 mg (0.87 mmol/l;P = 0.0022) compared with rosiglitazone.\n\nBODY.RESULTS.EFFICACY.PPG MEASUREMENTS < 10.0 MMOL/L:\nThe percentage of subjects with one, two or three PPG measurements < 10.0 mmol/l (ADA target) were greater for all doses of liraglutide compared with placebo (P < 0.05) but not rosiglitazone.\n\nBODY.RESULTS.BODY WEIGHT:\nMean weight at baseline was 81.6 kg. Mean reductions in weight from baseline to end of treatment were 0.2 kg with liraglutide 1.8 mg and 0.1 kg with placebo treatment, while increases occurred with either liraglutide 0.6 mg (0.7 kg), liraglutide 1.2 mg (0.3 kg) or rosiglitazone (2.1 kg) (Fig. 6). 
Unlike rosiglitazone, weight did not increase substantially with liraglutide and the differences between rosiglitazone and liraglutide were statistically significant (−2.3 to −1.4 kg; P < 0.0001), although there were no significant differences compared with placebo. Gender appeared to have no influence on the results, as indicated when added as a fixed effect in the ancova model. FIGURE 6Mean changes in body weight from baseline after 26 weeks of treatment. *P < 0.05 compared with placebo; ††††P < 0.0001 compared with rosiglitazone.\n\nBODY.RESULTS.INDICES OF PANCREATIC B-CELL FUNCTION AND INSULIN RESISTANCE:\nReductions in the proinsulin : insulin ratio were greater with both liraglutide 1.2 and 1.8 mg compared with either rosiglitazone or placebo (Table 2; P ≤ 0.02). HOMA-B increased with liraglutide (1.8 or 1.2 mg) compared with rosiglitazone (P < 0.05), while this increase was only different to placebo with liraglutide 1.2 mg (P = 0.01) and not liraglutide 1.8 mg (P = 0.051). There were no significant differences between treatments for HOMA-IR. Table 2 Selected indices of pancreatic B-cell function Variable Treatment Baseline Week 26 (LOCF) Least square difference from placebo (95% CI) Least square difference from rosiglitazone (95% CI) Proinsulin : insulin ratio Liraglutide 0.6 mg 0.42 ± 0.22 0.38 ± 0.24 −0.05 (−0.11; 0.00) −0.02 (−0.06; 0.03) Liraglutide 1.2 mg 0.45 ± 0.31 0.33 ± 0.20 −0.10 (−0.16; −0.05) † −0.07 (−0.11; −0.02) * Liraglutide 1.8 mg 0.48 ± 0.33 0.36 ± 0.20 −0.09 (−0.15; −0.03) * −0.05 (−0.10; −0.01) * Placebo 0.44 ± 0.27 0.46 ± 0.29 Rosiglitazone 0.45 ± 0.29 0.40 ± 0.20 HOMA-B (%) Liraglutide 0.6 mg 51 ± 43.3 70 ± 88.6 15 (−19.10; 49.0) 11 (−16.7; 39.0) Liraglutide 1.2 mg 71 ± 254.3 99 ± 184.3 43 (8.10; 76.9) * 39 (10.3; 67.0) * Liraglutide 1.8 mg 56 ± 84.6 91 ± 108.2 34 (−0.23; 68.5) 30 (2.00; 58.6) * Placebo 56 ± 103.3 52 ± 107.3 Rosiglitazone 46 ± 36.2 59 ± 63.3 * P ≤ 0.05; † P < 0.0001. CI, confidence interval; HOMA, homeostatis model assessment; LOCF, last observation carried forward. \n\nBODY.RESULTS.BLOOD PRESSURE AND PULSE:\nAlthough decreases in systolic blood pressure occurred with either liraglutide 1.2 or 1.8 mg (2.6–2.8 mmHg), they were not significantly different from placebo or rosiglitazone (0.9–2.3 mmHg). Reductions in diastolic blood pressure also occurred with all treatments (0.7–1.4 mmHg), with no significant differences between treatments. Pulse increases above baseline ranged from 2 to 4 beats/min with the three doses of liraglutide and 1 beat/min with rosiglitazone, while pulse decreased by 1 beat/min with placebo. Changes in pulse for all doses of liraglutide were significant vs. placebo (P ≤ 0.002). This also was true with either liraglutide 1.8 or 1.2 mg compared with rosiglitazone (P < 0.01).\n\nBODY.RESULTS.SAFETY:\nThe most common treatment-emergent AEs that were considered by investigators to be either possibly or probably related to liraglutide were gastrointestinal (diarrhoea, nausea, dyspepsia and constipation) and nervous system disorders (headache and dizziness), particularly during the first 4 weeks. Nausea was highest with liraglutide 1.2 mg (10.5%) and lowest with placebo (1.8%). Vomiting (4.4%) and diarrhoea (7.9%) were also higher with liraglutide 1.2 mg. Withdrawals because of nausea ranged from 0.9–2.2%, vomiting 0.4–0.9% and diarrhoea 0–1.3%. Nausea was more common with liraglutide compared with placebo and rosiglitazone, particularly during the first 4 weeks (Fig. 7). 
Frequency of nausea was less in the liraglutide 0.6 mg treatment group compared with the higher doses of liraglutide. Generally, the occurrence of nausea dissipated from 4 to 26 weeks of treatment in all groups using liraglutide (Fig. 7). FIGURE 7Percentage of subjects experiencing nausea over the course of the study. Key: liraglutide 0.6 mg with glimepiride: black line with filled circles; liraglutide 1.2 mg with glimepiride: black line with filled triangles; liraglutide 1.8 mg with glimepiride: grey line with hollow circles; glimepiride grey lines with filled squares; rosiglitazone and glimepiride: grey line with hollow triangles. The incidence of serious AEs ranged between 3 and 5%: placebo (3%), rosiglitazone (3%), liraglutide 0.6 mg (3%), liraglutide 1.2 mg (4%) and liraglutide 1.8 mg (5%). Most treatment-emergent serious AEs were judged by investigators to be unlikely to be related to trial products. No deaths were reported during the trial. One subject developed chronic pancreatitis whilst taking liraglutide 0.6 mg; the person had no reported previous history of pancreatitis. The subject continued on liraglutide therapy and completed the trial. At screening, five patients had been previously diagnosed with pancreatitis. As pancreatitis was not an exclusion criterion, these patients were randomized as follows: one to liraglutide 0.6 mg, one to liraglutide 1.2 mg, two to liraglutide 1.8 mg and one to rosiglitazone + glimepiride. All five patients completed the trial without reporting pancreatitis as an adverse event. Hypoglycaemia was infrequent with all treatments. One major hypoglycaemic episode (self-measured blood glucose = 3.0 mmol/l) occurred 9 days after treatment started in a subject receiving liraglutide 1.8 mg in combination with glimepiride. Although medical assistance was not needed, the subject required third-party assistance. The investigator judged the episode as likely to be related to glimepiride and reduced the dose from 4 to 3 mg after the incident. Minor hypoglycaemia occurred in < 10% of subjects for any treatment. The proportion of subjects experiencing minor hypoglycaemia during the trial was lowest with placebo (i.e. glimepiride monotherapy 2.6%; 0.17 events/subject-year), comparable with liraglutide 0.6 mg (5.2%, 0.17 events/subject-year) and rosiglitazone (4.3%, 0.12 events/subject-year) groups and similar between the liraglutide 1.2 mg (9.2%, 0.51 events/subject-year) and liraglutide 1.8 mg (8.1%, 0.47 events/subject-year) treatment groups. Incidence was higher with liraglutide 1.2 mg (P = 0.0024) and 1.8 mg (P = 0.0065) compared with rosiglitazone and liraglutide 1.2 mg compared with placebo (P = 0.048), occurring in the setting of lower mean HbA1c values. Antibodies to liraglutide were found in 9–13% of subjects treated with liraglutide. No significant effects of these antibodies on HbA1c were found in pooled analyses of four trials including the current study. There were no clinically relevant changes in ophthalmoscopy, biochemistry, urinalysis, haematology or ECG assessments. No significant differences in calcitonin were found between the three groups treated with liraglutide when compared with either placebo or rosiglitazone at the end of the trial at week 26.\n\nBODY.DISCUSSION:\nTreatment with liraglutide plus glimepiride was superior to glimepiride monotherapy at all doses of liraglutide and superior to rosiglitazone plus glimepiride for the two higher liraglutide doses for improving HbA1c. 
Similar findings for reductions in FPG and PPG highlight improved 24-h glucose control with once-daily liraglutide, with substantially more subjects reaching glycaemic targets, particularly with liraglutide 1.8 mg. Improvements in pancreatic B-cell function were larger with liraglutide 1.2 and 1.8 mg compared with rosiglitazone. Liraglutide was well tolerated and occurrence of gastrointestinal AEs was low overall, particularly after week 4. Although rates of hypoglycaemia were low in all treatment groups (< 10%), minor hypoglycaemic events occurred more often in patients treated with glimepiride plus liraglutide 1.2 or 1.8 mg than with glimepiride alone. It should be noted, however, that patients treated with liraglutide 1.2 or 1.8 mg achieved a lower HbA1c than those receiving glimepiride monotherapy. At lower HbA1c levels, sulphonylureas are known to elicit hypoglycaemia more readily than at higher levels. In clinical practice it may be possible to reduce the dose of sulphonylurea (when used with liraglutide) to minimize risk of hypoglycaemia and maintain HbA1cimprovements. Although weight effects were modest, liraglutide produced more favourable weight effects compared with rosiglitazone, which produced substantial weight gain. In other studies with liraglutide, subjects adding a 1.8-mg dose to metformin lost 2.8 kg [14], while those adding both metformin and glimepiride lost 1.8 kg compared with placebo [15] (both over 26 weeks) and those on liraglutide monotherapy (1.8 mg) lost 2.45 kg over 52 weeks [16]. In our study, because sulphonylureas usually cause weight gain, inclusion or optimization of glimepiride but not metformin may have mitigated the weight benefits typically associated with liraglutide. Lack of weight effects could be secondary to lower baseline body weight, withdrawal of previous metformin treatment or defensive snacking to minimize risk of hypoglycaemia. It might have been expected that the greater weight gain with rosiglitazone compared with liraglutide 1.8 mg would be associated with a concurrent increase in insulin resistance with rosiglitazone. The absence of this effect could reflect the insulin-sensitizing nature of rosiglitazone. Improvements in pancreatic B-cell function associated with liraglutide are consistent with other studies [7–9]. Study strengths include inclusion of both placebo and active (rosiglitazone) comparators and that OGLAs were optimized (not maximized) before randomization to minimize risk of hypoglycaemia. Limitations of the study include short duration of the trial and restriction on glimepiride and rosiglitazone in some countries that precluded maximal dosing. The impact of using other GLP-1-based treatments [such as exenatide, or the dipeptidyl peptidase-4 (DPP-4) inhibitor, sitagliptin] with sulphonylureas in subjects with T2D has been studied. In a 30-week American trial where exenatide twice a day was added to sulphonylureas, HbA1c was reduced by 0.46% from baseline with 5 μg and 0.86% with 10 μg [17] compared with 1.1% with liraglutide 1.8 or 1.2 mg. This reduction in HbA1c with liraglutide is consistent with other LEAD trials investigating liraglutide as monotherapy or in combination with various OGLA drugs. In these trials, HbA1c was reduced by 1–1.5%[14,16,18–20]. Reductions in FPG with exenatide were 0.3 and 0.6 mmol/l from baseline with 5 μg and 10 μg, respectively, compared with 1.4 mmol/l with liraglutide 1.8 mg; weight loss of 1.6 kg occurred with exenatide 10 μg compared with 0.2 kg for liraglutide 1.8 mg [17]. 
Differences in weight effects may be as a result of lower baseline weight in this trial (82 kg) compared with exenatide (96 kg) and discontinuation of previous metformin therapy, unlike the exenatide trial where exenatide was added to previous sulphonylurea monotherapy [17]. Other large-scale trials with liraglutide in combination with sulphonylureas have demonstrated weight loss of 2–3 kg [18,20]. Withdrawals from exenatide trials ranged from 24–30% compared with 9–14% with liraglutide in this study. Nausea with exenatide ranged from 39% with 5 μg to 51% with 10 μg [17] compared with 10.5% for liraglutide. Furthermore, 41% were positive for anti-exenatide antibodies compared with 9–13% with anti-liraglutide antibodies. With sitagliptin 100 mg once daily for 24 weeks, HbA1c decreased by 0.3% from baseline in subjects receiving glimepiride, with 11% achieving an HbA1c < 7.0%[21]. Reductions in FPG and PPG from baseline were 0.05 and 1.4 mmol/l, respectively, while weight increased by 0.8 kg and the prevalence of nausea was < 1%. Although head-to-head trials are required to test true differences between these agents, the marked effects of liraglutide on FPG may be as a result of consistent blood levels of liraglutide maintained over 24 h compared with exenatide which has to be administered 60 min before breakfast and dinner and has a half-life of 1.5–3.6 h [22]. In a recent 26-week head-to-head trial comparing liraglutide with exenatide, liraglutide produced a 0.3% greater decrease on HbA1c (P < 0.0001) [20]. Because DPP-4 inhibitors inhibit the degradation of GLP-1, the efficacy of sitagliptin is dependent on levels of endogenous GLP-1 which is physiologically low compared with the much higher pharmacological levels of liraglutide. Pharmacological levels may be needed to induce satiety, weight loss and possibly larger HbA1c reductions. Liraglutide is an effective and well-tolerated once-daily human GLP-1 analogue that improves overall glycaemic control and indices of pancreatic B-cell function with minimal weight gain and risk of hypoglycaemia when used in combination with a sulphonylurea for T2D.\n\nBODY.COMPETING INTERESTS:\nThe study was funded by Novo Nordisk, the manufacturer of liraglutide. In collaboration with the investigators, Novo Nordisk was responsible for the study design, protocol, statistical analysis plans, oversight, analysis and reporting of the results. Data were recorded at the clinical centres and maintained by the sponsor. The LEAD-1 SU study group had full access to the data. Final responsibility for the decision to submit the manuscript for publication was the authors. MM has received lecture fees from Novo Nordisk, Servier, MSD; JS has received honoraria, grants and lecture fees from Novo Nordisk; MB, WMWB and NAK have no conflicts to declare; JS has received lecture fees from Novo Nordisk; MZ is employed by, and holds stock in, Novo Nordisk; TLT is employed by Novo Nordisk; SC is a member of the international advisory board on liraglutide for Novo Nordisk and has received lecture fees from Novo Nordisk.\n\n**Question:** Compared to Placebo plus glimepiride what was the result of Liraglutide (1.2 mg) plus glimepiride on ADA fasting plasma glucose goals between 5.0 mmol/l and less than 7.2 mmol/l?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly increased\n"
}
] |
significantly increase
|
400
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Incisional hernia after upper abdominal surgery: a randomised controlled trial of midline versus transverse incision\n\n ABSTRACT.OBJECTIVES:\nTo determine whether a transverse incision is an alternative to a midline incision in terms of incisional hernia incidence, surgical site infection, postoperative pain, hospital stay and cosmetics in cholecystectomy.\n\nABSTRACT.SUMMARY BACKGROUND DATA:\nIncisional hernias after midline incision are commonly underestimated but probably complicate between 2 and 20% of all abdominal wall closures. The midline incision is the preferred incision for surgery of the upper abdomen despite evidence that alternatives, such as the lateral paramedian and transverse incision, exist and might reduce the rate of incisional hernia. A RCT was preformed in the pre-laparoscopic cholecystectomy era the data of which were never published.\n\nABSTRACT.METHODS:\nOne hundred and fifty female patients were randomly allocated to cholecystectomy through midline or transverse incision. Early complications, the duration to discharge and the in-hospital use of analgesics was noted. Patients returned to the surgical outpatient clinic for evaluation of the cosmetic results of the scar and to evaluate possible complications such as fistula, wound dehiscence and incisional hernia after a minimum of 12 months follow-up.\n\nABSTRACT.RESULTS:\nTwo percent (1/60) of patients that had undergone the procedure through a transverse incision presented with an incisional hernia as opposed to 14% (9/63) of patients from the midline incision group (P = 0.017). Transverse incisions were found to be significantly shorter than midline incisions and associated with more pleasing appearance. More patients having undergone a midline incision, reported pain on day one, two and three postoperatively than patients from the transverse group. The use of analgesics did not differ between the two groups.\n\nABSTRACT.CONCLUSIONS:\nIn light of our results a transverse incision should, if possible, be considered as the preferred incision in acute and elective surgery of the upper abdomen when laparoscopic surgery is not an option.\n\nBODY.INTRODUCTION:\nThe rate of incisional hernia after midline incision is commonly underestimated but probably lies between 2 and 20% [1–5]. Thus, incisional hernia is a major postoperative problem. The treatment of incisional hernia is complicated by high rates of recurrences. Recently, in a randomised controlled trial published by Burger et al. [6], midline incisional hernia repair has been shown to be associated with a 10-year cumulative recurrence rate of 63 and 32% for suture and mesh repair, respectively. The midline incision is the preferred incision for surgery of the upper abdomen, despite evidence that alternatives, such as the lateral paramedian and transverse incision, exist and might reduce the rate of incisional hernia [7]. 
Various approaches to opening the abdomen have been advocated over time. The choice for a certain incision is dependent on the exposure necessary for the desired procedure to succeed. A midline incision, be it supraumbilical, infraumbilical or both, is an approach especially suited for emergency and exploratory surgery because of the quick and generous exposure that can be achieved within a few minutes [8, 9]. The avascular nature of the linea alba minimises blood loss during this procedure. A supraumbilical transverse incision may be utilised in case exposure of the upper abdomen is desired. During this incision, the damage inflicted to the segmental arteries and nerves is previously described as being minimal [10]. Previously, only one randomised controlled trial, comparing transverse and true midline incisions, has been published specifically addressing incisional hernia incidence [11]. To determine whether the use of a transverse incision is an alternative to a midline incision for open cholecystectomy in terms of incisional hernia incidence, surgical site infection, postoperative pain and hospital stay, this randomised controlled trial was performed. This trial was conducted in an era when laparoscopic cholecystectomy was not yet available. The possibility of low incisional hernia rates after transverse incisions and the fact that little is known about potential advantages incited us to publish the relevant results of this randomised controlled trial which has been performed in the past and has only been reported in a Dutch thesis by one of the authors (H.L.). The primary endpoint of this study was the incisional hernia incidence after 12 months of follow-up. Secondary endpoints included pain and cosmetic appearance.\n\nBODY.METHODS.PROTOCOL:\nSome 150 consecutive female patients were randomly assigned to a midline or transverse incision as an approach for elective cholecystectomy or combined cholecystectomy and cholangiography (with or without consecutive choledochotomy) (75 and 75 patients, respectively). Emergency procedures were excluded from participation. The sample size is based on an incisional hernia rate reduction from 20 to 6% at a power of 80% and an error rate of 5%. Obtaining informed consent was conducted in accordance with the ethical standards of the Helsinki Declaration of 1975. The investigation reported was performed with informed consent from all of the patients and followed the guidelines for experimental investigation with human subjects and was approved by the medical ethics committee. An independent statistician prepared closed, tamper-proof envelopes containing the random allocation (Fig. 1). Patients were randomised for one of the procedures in theatre through the opening of the envelopes.Fig. 1Flow chart of patient inclusion and follow-up Patient-related factors that were recorded were age, body mass and length and date of operation. Operation-related factors that were recorded were the exact nature of the operation, length of the incision, the thickness of the subcutaneous fat, surgeon performing the procedure, as well as the duration of the operation (skin-to-skin time). In the immediate postoperative period, the use, dose and type of analgesics was recorded and a pain score was administered. The use of analgesics (morphine 7.5 mg intra-muscular injection, 4 h minimum interval between consecutive injections) was monitored for 48 h after surgery; the pain score was administered for the first 6 days after surgery. 
In patients assigned to surgery through a midline incision, the skin was incised from just below the xyphoid process to just above the umbilicus. The abdominal wall was opened in the midline by incising the linea alba. A Collin type (two-bladed) self-retaining retractor was used to maintain exposure. The abdominal wall was closed in one layer using single polygalactin 910 sutures (Vicryl; Ethicon, Amersfoort, The Netherlands). The skin was consequently closed using running monofilament nylon sutures (Ethilon; Ethicon, Amersfoort, The Netherlands). Patients randomised for a transverse incision received a right-sided unilateral transverse incision between 3 and 4 cm below the costal margin. The rectus muscle was incised. The fibres of the external and internal obliques and the transverse muscles were separated in the direction of their course. Exposure was achieved through the use of a manually held single-bladed retractor. Closure of the abdominal wall was achieved by closure of the peritoneum and the posterior rectus fascia using a continuous, polygalactin 910 suture (Vicryl; Ethicon, Amersfoort, The Netherlands). The anterior rectus sheath and the fascia of the internal and external transverses were closed using simple interrupted polygalactin 910 sutures (Vicryl; Ethicon, Amersfoort, The Netherlands). Towards the end of both procedures, a Redon low-vacuum drain catheter was placed, which was guided outside the abdominal cavity approximately 5 cm from the incision. The skin was consequently closed using continuous monofilament nylon suture (Ethilon; Ethicon, Amersfoort, The Netherlands). All patients received a dose of 5,000 IU of sodium–heparin on the morning of the procedure as thrombosis prophylaxis.\n\nBODY.METHODS.STATISTICAL ANALYSIS:\nThe Pearson χ2 test was used for comparing percentages. In case of small expected numbers, a Fisher's exact test was performed. Continuous variables were analysed using the Mann–Whitney test. A P-value of 0.05 or less (two-sided) was considered to be statistically significant. Means and medians are expressed ±standard deviation (SD).\n\nBODY.METHODS.FOLLOW-UP:\nPatients returned to the surgical outpatient clinic for evaluation of the cosmetic results of the scar and to evaluate possible complications, such as fistula, wound dehiscence and incisional hernia, after a minimum of 12 months follow-up. The patient and the surgeon evaluated the cosmetic results independently and were asked to rate the scar as unsatisfactory, satisfactory or fine. Furthermore, the length and width of the scar was measured.\n\nBODY.RESULTS.STUDY GROUP:\nSome 150 consecutive patients were randomised for participation in this study during an inclusion period from April 1977 until July 1979. Seventy-five patients received a transverse incision and 75 patients a midline incision (Fig. 1). One patient was withdrawn from further follow-up after developing peritonitis and consequent acute respiratory distress syndrome (ARDS) not related to the closure of the abdominal wall 2 days after surgery (transverse incision group). The patients' average age was 51.9 and 51.4 years for the midline and the transverse incision groups, respectively. Furthermore, no differences were found in the body mass and average length between the two groups (Table 1). A cholecystectomy was performed using a transverse incision in 52 patients and utilising a midline incision in 52 patients also. Fifteen and 16 patients, respectively, underwent a combined cholangiography/cholecystectomy. 
A further 7 and 6 patients, respectively, were treated with a cholangiography/cholecystectomy plus additional choledochotomy and the postexploratory placement of a T-tube.Table 1Baseline characteristics of the patients undergoing surgery, according to study groupVariableMidline incisionTransverse incisionn = 75n = 74Average age (years) ± SD51.9 ± 14.851.4 ± 13.8Average weight (kg) ± SD71.3 ± 14.568 ± 14.3Average length (cm) ± SD163.5 ± 7.8164 ± 7.3\n\nBODY.RESULTS.SURGEON:\nStaff surgeons performed 17% (13/75 patients) of all procedures performed through a midline incision. The remainder of the procedures through a midline incision was carried out under staff surgeon supervision. Staff surgeons performed 14% of all procedures in the transverse incisions study group (10/74 patients) and supervised the remainder. No statistically significant difference was found between the two randomised groups (P = 0.65).\n\nBODY.RESULTS.DURATION OF SURGERY:\nNo significant difference was noted in the skin-to-skin time (in min) for the two different incisions (Table 2). Surgery utilising midline and transverse incision took 56.9 ± 29.3 and 53.2 ± 26.8 min, respectively (P = 0.35). The total duration of the procedures until extubation (in min) did not differ between the midline and transverse incisions (71.0 ± 30.5 and 67.0 ± 27.3, respectively, P = 0.34).Table 2Length of incision, thickness of subcutaneous fat and skin-to-skin time, according to study groupVariableMidline incisionTransverse incisionP-valueLength of incision (mm) ± SDa164 ± 28140 ± 24<0.0001Thickness of subcutaneous fat (mm) ± SDa34.5 ± 13.030.3 ± 12.40.05Skin-to-skin time (min) ± SDa56.9 ± 29.353.2 ± 26.80.40Width of scar (mm) ± SDb8.3 ± 1.43.3 ± 1.2<0.0001aMeasured during surgery in 75 midline and 74 transverse incisionsbMeasured at follow-up in 63 and 60 midline and transverse incisions, respectively\n\nBODY.RESULTS.PAIN AND ANALGESICS:\nSignificantly more patients, having undergone a midline incision, reported pain on day one, two and three postoperatively (P < 0.0001, Table 3). In the midline incision group, 28/75 patients required no or only one dose of analgesics; the remainder required two or more doses. Thirty-one patients operated through a transverse incision required no analgesics or only one dose; 43 patients (the remainder) required two or more. No significant difference in the use of analgesics was found between the groups (P = 0.69).Table 3Postoperatively reported pain, according to study group, shown as the number of patients reporting pain at the time points indicated (percentage), with the remainder of patients reporting no painTime point after surgeryMidline incision n = 75Transverse incision n = 74P-valuePatients reporting pain, n (%)Patients reporting pain, n (%)3–4 h68 (91)60 (81)0.09First day64 (85)39 (53)<0.0001Second day57 (76)23 (31)<0.0001Third day28 (37)9 (12)<0.0001Fourth day5 (7)3 (4)0.72Fifth day0 (0)1 (1)0.50Sixth day0 (0)1 (1)0.50\n\nBODY.RESULTS.COMPLICATIONS:\nPostoperative complications (Table 4) were seen in 16 out of 75 patients (21%) from the midline incision group and in 15% from the transverse incision group (11 patients) (P = 0.30). Briefly, one patient in each group developed cardiac complications; 8 and 6 patients developed urinary retention after the midline and transverse incisions, respectively (P = 0.59). 
Surgical site infections were diagnosed in 7 and 3 patients, respectively (P = 0.33).Table 4Rate of complications after surgery, according to study group, shown as the number of patients diagnosed with complications (percentage)ComplicationMidline incisionTransverse incisionP-valuen = 75 n (%)n = 75 n (%)Cardiac1 (1)1 (1)1Urinary retention8 (12)6 (8)0.59ARDS01 (1)0.50Surgical site infection7 (9)3 (4)0.33Haemorrhage1 (1)00.50Pneumonia01 (1)0.50Total17 (23)12 (16)0.30\n\nBODY.RESULTS.DISCHARGE:\nForty-five (60%) and 42 (57%) patients from the patients having undergone a midline or a transverse incision, respectively, were discharged on day 6 or 7 postoperatively. The remaining patients from each group left hospital care on day 8 or later. The duration of hospital admission did not differ between the two types of incision (P = 0.74).\n\nBODY.RESULTS.COSMETICS:\nThe width and length of all incisions was measured during the follow-up visit (Table 2). The mean width of the scar after the healing of the midline incisions was found to be 8.3 ± 1.4 mm. The mean width of the scar after the healing of the transverse incisions was measured to be 3.3 ± 1.2 mm. This observed difference is significant (P < 0.0001). The length of the incisions was 140 ± 24 mm and 164 ± 28 mm for the transverse and the midline incisions, respectively. The difference in scar length was found to be significant (P < 0.0001).\n\nBODY.FOLLOW-UP:\nEighty-one percent of all patients operated through a transverse incision were seen during the follow-up examination (n = 60). Of the patients operated through a midline incision, 63 out of 75 were seen at the outpatient clinic (84%). The patients that were lost to follow-up could either not be traced or had deceased (Fig. 1). The minimum follow-up for the evaluation of cosmetic results and hernia incidence was 12 months and the maximum was 36 months.\n\nBODY.FOLLOW-UP.INCISIONAL HERNIA:\nFrom the patients that had undergone the procedure through a transverse incision, one (1/60; 2%) presented with an incisional hernia as opposed to 9 patients from the midline incision group (9/63; 14%); 95% confidence interval (CI) 7.5–25.4%. This difference in hernia incidence is significant (P = 0.017). No significant correlation was found between the incisional hernia rate and surgical site infection (P = 0.07).\n\nBODY.FOLLOW-UP.SUBJECTIVE COSMETICS:\nPatients and surgeons alike were asked to rate the appearance of the scar during the postoperative follow-up outpatient clinic visit. Both the surgeons and the patients found the scar resulting from the transverse incision to be more cosmetically pleasing (P < 0.0001 and P = 0.03, respectively, Table 5).Table 5Number of patients and surgeons rating the cosmetics of a scar at follow-upScoreMidline incision (n = 63)Transverse incision (n = 60)Patients, n (%)Surgeons, n (%)Patients, n (%)Surgeons, n (%)Unsatisfactory6 (10)25 (40)2 (3)6 (10)Satisfactory16 (25)27 (43)9 (15)12 (20)Fine41 (65)11 (17)49 (82)42 (70)Total63636060Difference between type of incision: patients P = 0.03; surgeons P < 0.0001\n\nBODY.DISCUSSION:\nThis prospective randomised study of transverse and midline incisions for open cholecystectomy shows that a significant reduction of incisional hernia incidence can be achieved through the use of a transverse incision. Only one other study (published in 1980) reported the incidence of incisional hernia after upper abdominal midline and unilateral transverse incision in a randomised trial. 
No difference between the two techniques (8 and 6% incisional hernia, respectively) was found, but the relatively short follow-up of 6 months, however, may be held accountable for this finding [11]. Three retrospective studies showed rates of incisional hernia of 3.2, 5.4 and 16.5% for midline incision and 1.3, 6.7 and 13.4% for transverse incision without statistically significant differences [12–14]. The possible reason for the rather high incidence of incisional hernia in the midline incision group (14%) may lie in the use of resorbable 910 polygalactin sutures. Nevertheless, the use of the same type of resorbable suture in the closure of the transverse incisions resulted in a 2% hernia rate. There is evidence for the importance of proper technique and choice of incision as a means to reduce incisional hernia being more important than the use of suture material [7]. Furthermore, as mentioned above, it is known that the incidence of incisional hernia in the case of a midline incision lies between 2 and 20%. From our data, the NNT (numbers needed to treat) is calculated to be 8 (95% CI 5–30) and the RRR (relative risk reduction) is 88% (95% CI 23–100%). Luijendijk et al. [15] have published a hernia rate of 2% after Pfannenstiel incisions closed using 910 polygalactin, which is in agreement with our findings in the patients randomised for a transverse incision, emphasising the importance of the incision over the choice of suture material. In our study, significantly fewer patients reported pain on day 1, 2 and 3 after transverse incisions, a result that was also described by other authors [16, 17]. Greenall et al. [18] published a contradictory report (in 1980) in which no significant difference in postoperative pain was found between midline and transverse incisions. The previously mentioned study, however, only analysed 46 out of 572 patients (8%) with regard to pain, which may explain the finding. In the same way, Lacy et al. suspended visual analogue pain scoring in a study comparing midline and transverse incisions for abdominal aortic surgery. Remarkably, the two groups in our study did not differ in terms of postoperative analgesia, a finding that is also reported by Lindgren et al. [17] and Donati et al. [19]. In our study, surgeons as well as patients were significantly more satisfied with the aesthetic appearance after a transverse in comparison with a midline incision. The scars after transverse incisions were found to be significantly shorter and less wide than the midline incisions, which may account for the observed difference. A possible reason for this is that a transverse incision is executed parallel to the prevailing direction of the skin lines on the abdomen and, therefore, the tension on the wound and consequent scar is low. Cholecystectomy has come a long way since this trial. The introduction and widespread acceptance of laparoscopic technique as the treatment of choice has rendered open cholecystectomy to be an operation for exceptional, and perhaps surgically difficult, circumstances. Nowadays, the study reported is hardly feasible, yet, the results are still applicable and very relevant for other surgical procedures in the (upper) abdomen. Knowledge of the favourable results of a transverse incision may aid surgeons in their choice when finding themselves in the unfortunate position of needing conversion to open cholecystectomy. 
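(A quick arithmetic check, not part of the original analysis: the NNT of 8 and RRR of 88% quoted above follow directly from the two observed hernia rates, 1/60 versus 9/63, and a two-sided Fisher's exact test on the same counts reproduces the reported P value of 0.017. Fisher's exact test is listed in the methods for small expected numbers, although the paper does not state which test produced this particular figure; the Python snippet below is only an illustrative check.)

```python
from scipy.stats import fisher_exact

transverse_hernia, transverse_n = 1, 60
midline_hernia, midline_n = 9, 63

risk_t = transverse_hernia / transverse_n   # ~0.017
risk_m = midline_hernia / midline_n         # ~0.143
arr = risk_m - risk_t                       # absolute risk reduction, ~0.126
nnt = 1 / arr                               # ~7.9, i.e. the reported NNT of 8
rrr = arr / risk_m                          # ~0.88, i.e. the reported RRR of 88%

# 2x2 table: rows = incision type, columns = (hernia, no hernia)
_, p_value = fisher_exact([[transverse_hernia, transverse_n - transverse_hernia],
                           [midline_hernia, midline_n - midline_hernia]])
print(round(nnt, 1), round(rrr, 2), round(p_value, 3))  # 7.9 0.88 0.017
```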
In conclusion, this investigation on transverse incisions might be helpful in reducing the incidence of incisional hernia in patients after open cholecystectomy. The midline incision is a preferred manner to achieve exposure of the abdominal cavity and is considered to be easily performed and quick. Although the midline incision is generally accepted, the incidence of incisional hernias is surprisingly high [1–5]. The choice for a particular incision should not only be based on exposure, but also on hernia incidence reduction, especially since recurrence rates after hernia repair are reported to be very high. Furthermore, the recurrence rate after incisional hernia repair is a disappointing 63 and 32% for suture and mesh repair, respectively [6]. In the light of these results, incisional hernia prevention is warranted. In this investigation, it is shown that a significant reduction (from 14.5 to 1.7%) of incisional hernia incidence was achieved by using a transverse incision. Hence, a transverse incision should be considered as the preferred incision in acute and elective surgery of the upper abdomen in which laparoscopic surgery is not an option. Full exposure of two quadrants is feasible through the use of a unilateral transverse incision in, for example, biliary, bariatric, liver and colonic surgery. The transverse incision should be part of the abdominal surgeon's armamentarium and is a preferable incision to prevent the high incidence of incisional hernia after abdominal surgery.\n\n**Question:** Compared to Midline incision what was the result of Transverse incision on Complications after surgery?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** no significant difference\n"
}
] |
no significant difference
|
472
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Randomized controlled trial of clopidogrel to prevent primary arteriovenous fistula failure in hemodialysis patients\n\n ABSTRACT:\nThe optimal vascular access for chronic maintenance hemodialysis is the arteriovenous fistula (AVF). Several studies suggest a role for antiplatelet agents in the prevention of primary AVF failure. A double-blind, randomized trial was conducted to assess the efficacy and safety of clopidogrel in hemodialysis patients. Ninety three patients were randomized to receive 75 mg/daily of clopidogrel or placebo. The treatment was initiated 7–10 days prior to scheduled access surgery and continued up to six weeks postoperatively, and then patients were monitored for six months. The primary outcome was AVF failure eight weeks after fistula creation. With a permuted block randomization schedule, 46 patients received clopidogrel and 47 patients received control placebo. The primary AVF failures at two months were 21.6% in placebo group and 5.2% in clopidogrel group (P = 0.03). The hazard ratio for the incidence of primary AVF failure was 0.72 (CI 95%, 0.41–1.01). Analysis of covariables indicated that this effect occurred principally as a result of clopidogrel administration. First hemodialysis from newly created AVF in clopidogrel group was significantly more successful than placebo group (P = 0.008). No life-threatening adverse event or severe bleeding was recorded in both groups. Clopidogrel seems to be effective and safe for prevention of primary AVF failure in hemodialysis patients.\n\nBODY.INTRODUCTION:\nPerformance of a successful hemodialysis procedure requires a functional vascular access. The preferred type of access is a native fistula because they have the lowest risk of complications, lowest need for intervention, and the best long-term patency.[1] Once an arteriovenous fistula (AVF) is created, it must develop to the point that it is usable. Vascular access dysfunction is one of the most important causes of morbidity in the hemodialysis population.[2] Primary failure of native fistulas occurs as a result of either thrombosis within the first several weeks following surgical creation (early thrombosis), or inadequate maturation of the vein.[3] The primary AVF failure rate is approximately 9–50%.[4–6] Fistula evaluation 4–6 weeks after creation should be considered mandatory.[7] The clinical manifestations of early fistula failure are failure to develop adequately to permit repetitive cannulation for dialysis, inadequate flow to support dialysis, and thrombosis. The characteristic pathology that results in AVF failure is a juxta-anastomotic stenosis.[8] Whether primary AVF failure can be prevented with pharmacologic agents has not been extensively examined. 
Several studies have indicated that the frequency of AVF failure and loss can be reduced with antiplatelet agents.[9–18] Although those results are encouraging, they do not provide conclusive evidence of the efficacy of antiplatelet agents among patients with AVF. On the basis of these considerations, we performed a randomized, double-blind trial to test the hypothesis that clopidogrel, would prevent primary AVF failure among hemodialysis patients.\n\nBODY.MATERIALS AND METHODS:\nThe study was a randomized, double-blind trial. Patients of minimum age 18 years close to the initiation of chronic hemodialysis requiring AVF, and patients who were undergoing chronic hemodialysis but requiring a new AVF at a different site were the inclusion criteria. Exclusion criteria included patients with a history of gastrointestinal bleeding or previous bleeding episodes within six months prior to initiation of the study, patients already receiving chronic anticoagulation therapy (antiplatelet agents or warfarin), patients with terminal or life-threatening disease, pregnancy, malignant hypertension, a platelet count of <100,000/mm3, and other demonstrated medical conditions that would make antiplatelet therapy dangerous. All patients were recruited from the outpatient hemodialysis program at Jondi Shapour University, and the same surgical team placed all fistulas. Randomization was performed centrally, by the coordinating center. The randomization was stratified according to medical center with a permuted block scheme, with a block size of four and equal allocation. After identifying and obtaining consent from eligible participants, the local study coordinator telephoned the coordinating center to obtain a randomization number, which corresponded to a specific medication bottle available in the local research pharmacy. Neither the details of the randomization sequence nor the identity of the medication assignment was known to the participant or any personnel at the participating sites. Consenting eligible participants were randomized to receive either clopidogrel (75 mg/daily) or matching placebo (47 patients in placebo group and 46 patients in clopidogrel group). The treatment was initiated 7–10 days prior to scheduled access surgery and continued up to six weeks postoperatively, with full approval by the Jondi Shapour University Institutional Review Board. Patients were monitored for occurrence of complications, need for hemodialysis until six months after trial. The primary null hypothesis of the study was that clopidogrel would have no preventive effect on the incidence of primary AVF failure. The primary outcome was AVF failure eight weeks after fistula creation. Fistula failure was determined by a member of the study team (either the study coordinator or the site principal investigator), who was blinded to the treatment allocation. The fistula was classified as patent if a bruit was detectable along the vein at least 8 cm proximal to the arteriovenous anastomosis throughout systole and diastole. Secondary outcomes included adverse events and mortality. Platelet hemostatic function was measured monthly, as whole-blood bleeding time. Routine blood chemistry profiles, dialysis prescriptions, body weights, medications, and complications for all patients were recorded in a computerized database, and thus were available for inclusion in the final data analysis. We obtained detailed information on bleeding events. 
Assessment of the severity of bleeding episodes was performed by a panel blinded to the treatment assignments. Discontinuation of the study drug following any bleeding event was the rule.\n\nBODY.MATERIALS AND METHODS.STATISTICAL ANALYSIS:\nData are presented as mean ± SE for continuous variables and as percentages for categorical variables. All data were tested for normality using the method of Kolmogorov-Smirnov. Statistical analyses were performed on an intention-to-treat basis. The t test was used when the means of two groups were compared. On the basis of intention-to-treat principles, all other participants for whom study medications were discontinued continued to be monitored according to the protocol. All hypothesis tests were conducted by using a significance level of 0.05 (two-sided). The statistical program SPSS version 13 (SPSS, Chicago, IL) was used to analyze the data.\n\nBODY.RESULTS:\nBetween December 2006 and March 2008, a total of 93 patients met the study criteria for enrollment. The demographic and baseline laboratory findings for each group are summarized in Table 1. There were no significant differences in covariables between groups. None of the covariables (covariables thought to influence the risk of AVF included age, gender, diabetes mellitus, bleeding time, and blood pressure) measured at baseline or follow-up times was correlated with the development of AVF failure. All patients were taking study medication 7–10 days prior to surgery. However, the medication had to be discontinued prematurely in 18 (19.4%) patients because of no intention to complete trial by six cases and complication occurring in 11 cases. One patient died before the end of trial. Two patients in clopidogrel group and three patients in placebo group were enrolled after creation of a second fistula. Prior AVFs failed because of late fistula failure in all of them. Finally, 75 patients completed trial (38 patients in clopidogrel and 37 patients in placebo groups). None of the patients was lost to follow-up. Table 1 Patients characteristics at baseline Characteristic All patients (%) Clopidogrel (%) Placebo (%) P value Age (years) 45.03 ± 1.2 44.23 ± 3.36 45.8 ± 2.84 0.47 Males 48 (51.6) 24 (25.8) 24 (25.8) 0.91 Diabetes mellitus 25 (26.9) 14 (15.1) 11 (11.8) 0.44 Patients on hemodialysis 63 (67.7) 29 (31.2) 34 (36.6) 0.33 Drug use by the end of trial 75 (80.6) 38 (40.9) 37 (39.8) 0.63 Five and three fistulae in clopidogrel-treated and placebo groups were proximal AVFs. Two patients in clopidogrel-treated group (5.26%) showed an early failure of the AVF compared to eight patients (21.62%) in placebo group (P < 0.05) [Table 2]. There was significant benefit of active treatment in the prevention of AVF failure. The hazard ratio for the incidence of primary AVF failure was 0.72 (CI 95%, 0.41–1.01). Fistula locations had no effects on the development of AVF failure in either group. During follow-up from eight weeks to six months no important complication occurred. Sixty patients required hemodialysis during this time. Hemodialysis was tried in 26 patients of clopidogrel group within six months after AVF creation and it was successful in 24 patients (92.3%). In placebo group, hemodialysis from new AVF was tried in 34 patients and it was successful in 24 cases (70.5%). First hemodialysis from newly created AVF in clopidogrel group was significantly more successful than placebo group (P = 0.008). The cumulative incidence of bleeding was similar between groups [Table 3]. 
No severe bleeding episode such as intracranial hemorrhage or life-threatening bleeding was recorded during active treatment period. There were no deaths attributable to bleeding in either treatment group. Bleeding times were similar at baseline for clopidogrel group (8.1 ± 0.3 min) and placebo group (8.4 ± 0.6 min) and remained stable (8.5 ± 0.4 min for clopidogrel group and 8.6 ± 0.3 min for placebo group) throughout the study period (P = 0.21). In addition, there were no differences between baseline and follow-up hematocrit values or changes in recombinant human erythropoietin doses during the study period for either group. Table 2 Clinical outcomes at the end of trial Outcome All patients (%) Clopidogrel (%) Placebo (%) P value Primary AVF failure Drug use by the end of trial 10 (26.88) 2 (5.26) 8 (21.62) 0.03 Premature cession of drug 2 (2.15) 0 (0) 2 (2.15) Table 3 Complications and mortality Complication All patients (%) Clopidogrel (%) Placebo (%) P value GI bleeding 5 (5.3) 2 (2.1) 3 (3.2) 0.31 Non GI tract bleeding 9 (9.6) 5 (5.3) 4 (4.3) 0.63 Death events 4 (4.3) 2 (2.1) 2 (2.1) 0.47 \n\nBODY.DISCUSSION:\nChronic maintenance hemodialysis requires stable and repetitive access to the intravascular compartment in order to deliver high rates of blood flow to the extracorporeal circuit. The AVF is the method of choice for the establishment of hemodialysis vascular access in patients with endstage renal disease.[1] The fistula is relatively simple to perform under local anesthesia and, when successfully established, is easy to needle and relatively free from complications. However, a significant proportion (9–50%) of fistulas fails early within three months of surgery.[4–6] An AVF with primary failure is defined as a fistula that never provided reliable hemodialysis.[18] Vascular access failure is the most common reason for hospitalization among hemodialysis patients.[19] The typical lesion of access thrombosis is neointimal vascular smooth muscle cell proliferation in the anastomotic draining vein. Platelet activation from endothelial injury may play an important role in stimulating platelet aggregators such as PDGF and thromboxane A2, in addition to directly stimulating vascular intimal proliferation.[18] Therefore, the therapeutic potential of antiplatelet agents including aspirin, sulfinpyrazone, dipyridamole, and ticlopidine were tested.[9–17] Our study was undertaken to determine the effects of clopidogrel on the incidence of primary AVF failure among newly created AVFs. We observed a significant risk reduction in the primary AVF failure in active treatment group compared to placebo group. The results of our analysis suggest that daily administration of 75 mg of clopidogrel, beginning 7–10 days prior to AVF creation, was successful in preventing the development of vascular failure with acceptable side effects. We were unable to account for the differences observed in our clinical trial on the basis of age, gender, diabetes mellitus, bleeding times, hematocrit levels, or weekly doses of recombinant human erythropoietin. This finding suggests that the risk reduction in vascular failure might be attributed to clopidogrel administration. Our results are supported by recent Cochrane report.[20] This meta-analysis confirmed the beneficial effect of antiplatelet treatment as an adjuvant to increase the patency of AVFs in the short term. However, there have been multiple studies showing variable results of antiplatelet agents on vascular access failure. 
Yevzlin et al., showed a negative association between antiplatelet therapy and access patency.[21] In this trial, usage of some drugs was not associated with significant risk reduction in access failure. Moreover, antiplatelet therapy in patients with access failure was associated with significantly increased risk of access failure. Kaufman et al., demonstrated no change in the risk of graft thrombosis with aspirin plus clopidogrel therapy.[22] They also noted that in chronic hemodialysis patients there is a trend toward increased thrombosis with aspirin therapy. Also Kooistra et al., were unable to demonstrate a benefit with low-dose aspirin on thrombovascular events in 68 hemodialysis patients.[23] In contrast, combining all the studies of antiplatelet agents in patients with new primary fistulae in which there was a placebo control group, the thrombosis rate in the control group was significantly higher than active treatment group.[20] Three trials compared ticlopidine with placebo with a total number of 312 participants undergoing AVF formation or graft interposition. All three trials comparing ticlopidine with placebo favored treatment in both AVF and vascular grafts. In the Fiskerstrand study, two out of six patients in the ticlopidine group compared with five out of nine in the placebo group, developed fistulae thromboses at one month (OR = 0.40, CI 95%, 0.05– 3.42).[15] In the earlier Grontoft study, only two out of 19 who received treatment developed fistulae thromboses compared to eight out of 17 on placebo (OR = 0.13, CI 95%, 0.02–0.76).[13] In Gontoft 1998, 16 out of 130 patients who received ticlopidine developed thromboses in the fistulae compared with 25 out of 131 in the placebo group (OR = 0.60, CI 95%, 0.30–1.18).[14] The overall result of the meta-analysis also favored treatment (OR = 0.47, CI 95%, 0.26–0.85). The overall P-value was 0.01.[20] We also assessed the effect of clopidogrel on the successful initiation of hemodialysis via AVF. The rate of the performing successful first hemodialysis via AVF was greater in clopidogrel group. The overall incidence of bleeding events was 15% in our study. According to Kaufman et al., we expected to encounter with 16 episodes of bleeding during six months in our patients.[22] However, the incidence of bleeding episodes was lower than expected. This finding might be related to restrictive exclusion criteria. This study is the first carefully monitored trial that was limited to AVF. It is a well-known fact that AVFs have lower thrombosis rates compared with AVF grafts. In multiple studies, AVFs have been shown to have significantly improved patency rates and lower complication and infection rates. Some studies pointed out that antiplatelet therapy might be more effective in fistulas than in arteriovenous grafts.[1422] We studied this hypotheses for first time. Our results suggested that clopidogrel is an effective therapy in prevention of primary AVF failure. The major limitation of our study is the small number of patients, but in view of the promising results we believe that our preliminary findings deserved prompt communication. However, the data must be interpreted with caution because the pharmacological approach to prevent vascular access thrombosis in hemodialysis is still in its infancy. Overall, the effect of antiplatelet agents on vascular access patency needs further investigation. A prospective randomized controlled trial with larger number of patients is warranted. 
Currently the National Institutes of Health sponsored Dialysis Access Consortium is conducting an ongoing double-blind multicenter randomized evaluation of clopidogrel in AVF patency.[24]\n\nBODY.CONCLUSION:\nPrimary AVF failure remains a major problem for hemodialysis patients. Vascular access thrombosis prophylaxis needs to start early in the ESRD patient. Clopidogrel, beginning 7–10 days prior to AVF creation and continuing for six weeks, seems to prevent primary AVF failure with acceptable side effects in selected hemodialysis patients.\n\n**Question:** Compared to Placebo what was the result of Clopidogrel on Complications and mortality?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** no significant difference\n"
}
] |
no significant difference
|
344
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Efficacy and Tolerability of Budesonide/Formoterol in One Hydrofluoroalkane Pressurized Metered-Dose Inhaler in Patients with Chronic Obstructive Pulmonary Disease\n\n ABSTRACT:\nBackground: Combination therapy with a long-acting bronchodilator and an inhaled corticosteroid (ICS) is recommended in patients with chronic obstructive pulmonary disease (COPD) who have frequent exacerbations. The efficacy and tolerability of the combination of budesonide/formoterol have been demonstrated in patients with COPD when administered via the dry powder inhaler (DPI) in a 1-year study and when administered via the hydrofluoroalkane (HFA) pressurized metered-dose inhaler (pMDI) in a 6-month study. Objective: This study assessed the long-term efficacy and tolerability of budesonide/formoterol HFA pMDI in patients with moderate to very severe COPD. Methods: This was a 12-month, randomized, double-blind, double-dummy, parallel-group, active- and placebo-controlled, multicentre study (NCT00206167) of 1964 patients aged ≥40 years with moderate to very severe COPD conducted from 2005 to 2007 at 237 sites in the US, Europe and Mexico. After 2 weeks of treatment based on previous therapy (ICSs, short-acting bronchodilators allowed), patients received one of the following treatments twice daily: budesonide/formoterol pMDI 160/4.5 μg × two inhalations (320/9 μg); budesonide/formoterol pMDI 80/4.5 μg × two inhalations (160/9 μg); formoterol DPI 4.5 μg × two inhalations (9 μg); or placebo. Main outcome measures: The co-primary efficacy variables were pre-dose forced expiratory volume in 1 second (FEV1) and 1-hour post-dose FEV1. Results: Budesonide/formoterol 320/9 μg demonstrated greater improvements in pre-dose FEV1 versus formoterol (p = 0.008), and both budesonide/formoterol doses demonstrated greater improvements in 1-hour post-dose FEV1 versus placebo (p < 0.001). The rate of COPD exacerbations was lower in both budesonide/formoterol groups compared with formoterol and placebo (p ≤ 0.004). Both budesonide/formoterol doses were more effective than placebo (p ≤ 0.006) for controlling dyspnoea and improving health status (St George's Respiratory Questionnaire). All treatments were generally well tolerated. The incidence of pneumonia was not different for active (3.4–4.0%) and placebo (5.0%) groups. Conclusions: Budesonide/formoterol pMDI (320/9 μg and 160/9 μg) improved pulmonary function and reduced symptoms and exacerbations over 1 year in patients with moderate to very severe COPD. Only budesonide/formoterol pMDI 320/9 μg demonstrated greater efficacy for both co-primary variables compared with formoterol DPI 9 μg. 
Both budesonide/formoterol pMDI dosages were well tolerated relative to formoterol and placebo.\n\nABSTRACT.ELECTRONIC SUPPLEMENTARY MATERIAL:\nSupplementary material is available for this article at 10.2165/00003495-200969050-00004 and is accessible for authorized users.\n\nBODY.BACKGROUND:\nIn patients with chronic obstructive pulmonary disease (COPD) and frequent exacerbations, the combination of a long-acting bronchodilator and an inhaled corticosteroid (ICS) is recommended.[1] Budesonide and formoterol delivered via a single dry powder inhaler (DPI; Symbicort® Turbuhaler®; AstraZeneca, Lund, Sweden) have been shown to improve pulmonary function[2,3] and reduce COPD exacerbation rates.[2,3] Fixed-combination budesonide/formoterol is available in the US in a hydrofluoroalkane pressurized metered-dose inhaler (pMDI; Symbicort® Inhalation Aerosol; AstraZeneca LP, Wilmington, DE, USA). While most patients with COPD can benefit from either a pMDIor DPI, the pMDI formulation allows patients whomay have inadequate peak inspiratory flow for optimal delivery from a DPI to use the pMDI effectively. A recent 6-month study demonstrated the efficacy and tolerability of budesonide/formoterol pMDI in patients with moderate to very severe COPD.[4] Treatment with budesonide/formoterol pMDI resulted in significant improvements in pulmonary function, dyspnoea and health-related quality of life; however, the duration of the study was limiting with respect to the analysis of low-frequency events, such as COPD exacerbations, and precluded an assessment of safety outcomes with long-term exposure.[4] In this study, the long-term efficacy and tolerability of two dosage strengths of budesonide/formoterol pMDI were assessed over 12 months in patients with moderate to very severe COPD.\n\nBODY.PATIENTS AND METHODS.PATIENTS:\nThe inclusion criteria were designed to select a population with moderate to very severe COPD with previous exacerbations (i.e. appropriate candidates for combination ICS/long-acting β2-adrenoceptor agonist [LABA] therapy): age ≥40 years, diagnosis of symptomatic COPD for >2 years, ≥10 pack-year smoking history, prebronchodilator forced expiratory volume in 1 second (FEV1) of ≤50% of predicted normal and prebronchodilator FEV1/forced vital capacity (FVC) of <70%. Patients were to have a Modified Medical Research Council dyspnoea scale score of ≥2 and a history of at least one COPD exacerbation requiring oral corticosteroids or antibacterials within 1–12 months before the first study visit. Additional enrolment criteria were the same as those in a similar 6-month study by Tashkin et al.[4]\n\nBODY.PATIENTS AND METHODS.STUDY DESIGN:\nThis was a randomized, double-blind, double-dummy, parallel-group, active- and placebo-controlled, multicentre trial (ClinicalTrials.gov identifier: NCT00206167) conducted from 2005 to 2007 at 237 sites in the US, Europe and Mexico. The study protocol was approved by a local institutional review board and ethics committee, and written informed consent was obtained from patients. The study was designed to conform with the Declaration of Helsinki, and was consistent with the International Conference on Harmonisation and Good Clinical Practice and applicable regulatory requirements. After meeting eligibility criteria, patients entered a 2-week run-in period, during which they received ICS monotherapy if previously stable on ICS (alone or in combination) and ipratropium bromide at a fixed dose if previously receiving anticholinergics. 
Albuterol (salbutamol) was permitted for rescue use throughout the study. At randomization, all previous ICSs were discontinued and patients were randomized to one of four treatments (figure 1). Patients who were treated with ipratropium bromide during the run-in period were allowed to continue on it throughout the study period. Other allowed and disallowed concomitant medications were the same as those reported in the study by Tashkin et al.[4]\nFig. 1Study design. To maintain blinding, patients received both a pressurized metered-dose inhaler (pMDI) and a dry powder inhaler (DPI) containing either active treatment or double-dummy placebo (PL) as appropriate. This study used formoterol (FM) DPI (Oxis® Turbuhaler®, AstraZeneca, Lund, Sweden) as the FM comparator because FM is not available as a hydrofluoroalkane pMDI in the US. A previous study in asthma patients reported equivalent FM-related bronchodilatory effects when FM was administered in combination with budesonide (BUD) via pMDI or alone via DPI.[5] Patients were asked to return to the clinic for follow-up visits 3–8 at the end of months 1, 2, 4, 6, 9 and 12, and received a telephone call 4 weeks after the last clinic visit. bid = twice daily; R = randomization.\n\nBODY.PATIENTS AND METHODS.EFFICACY EVALUATIONS:\nThe co-primary efficacy variables, measured at all clinic visits, were pre-dose FEV1 and 1-hour post-dose FEV1. Additional pulmonary function variables included pre-dose and 1-hour post-dose FVC measured at all clinic visits, and morning and evening peak expiratory flow (PEF) recorded daily in patient diaries. In a subset of patients (n = 491 of 1964; 25%) who agreed to undergo spirometry testing, 12-hour serial spirometry was performed and pre-dose and 1-hour post-dose inspiratory capacity (IC) were collected at randomization and months 6 and 12. FEV1 was measured pre-dose and at 5, 15, 30, 60, 120, 180, 240, 360, 480, 600 and 720 minutes post-dose. Baseline-adjusted average 12-hour FEV1 was calculated as the area between the 12-hour post-dose FEV1-over-time curve and the baseline pre-dose FEV1, divided by observation time. Spirometry was performed according to American Thoracic Society guidelines.[6] Crapo-predicted normals for FEV1 were used.[7] Secondary efficacy endpoints included COPD exacerbations, dyspnoea (separate and composite scores) and health status, all assessed as previously described.[4] A COPD exacerbation was defined as worsening of COPD requiring an oral corticosteroid or hospitalization.[4] Dyspnoea was assessed daily before the evening dose of study medication using the Breathlessness Diary, a validated single-item component of the Breathlessness Cough and Sputum Scale (BCSS).[8] Health status was assessed at months 1, 2, 6 and 12 using the St George's Respiratory Questionnaire (SGRQ).[9,10] Sleep score, percentage of awakening-free nights (sleep score of 0) and study rescue medication use were assessed as previously described by Tashkin et al.[4]\n\nBODY.PATIENTS AND METHODS.SAFETY EVALUATIONS:\nSafety was assessed by adverse event (AE) reporting. Pneumonia events were reported by physicians based on the Medical Dictionary for Regulatory Activities (version 10.0) pneumonia-related preferred terms (pneumonia, bronchopneumonia, lobar pneumonia or pneumonia staphylococcal). 
Vital signs and 12-lead ECGs were evaluated as previously described by Tashkin et al.[4] Subsets of patients were assessed for 24-hour urinary cortisol (n = 179), 24-hour Holter monitoring (n = 520), bone mineral density (BMD) at the hip and spine regions (n = 326), and ophthalmological assessments (n = 461), including intraocular pressure and progression of lenticular opacities. In all patients, blood and urine samples were collected, and comprehensive physical examinations were carried out at the time of screening and at the end of month 12. Blood and urine samples were analysed by a central laboratory (Quest Diagnostics Clinical Laboratories, Van Nuys, CA, USA). Vital signs, including heart rate and blood pressure, were measured at all study visits. Samples to assess 24-hour urinary cortisol levels were collected in a subset of patients at or before randomization and within 1 week before the 6- and 12-month visits, and analysed by Quest Diagnostics. 24-Hour Holter monitoring was performed at screening and months 1 and 4, BMD assessments (two sets of dual energy x-ray absorptiometry scans in the hip and spine regions) were performed at screening and the end of month 12, and ophthalmology assessments (intraocular pressure and lenticular opacities) were performed at screening and the end of months 6 and 12.\n\nBODY.PATIENTS AND METHODS.STATISTICAL ANALYSES:\nThe efficacy analysis set (i.e. intent-to-treat population) included all randomized patients who received at least one dose of randomized study medication and contributed sufficient data for at least one co-primary or secondary efficacy endpoint to be calculated during the randomized treatment period. The safety analysis population included all randomized patients who received at least one dose of randomized study medication and from whom any post-randomization data were available. For the subsets of patients who underwent serial spirometry, 24-hour urinary cortisol, BMD, 24-hour Holter monitoring and ophthalmological analyses, analysis sets were defined as those patients who received at least one dose of randomized study medication and had baseline and on-treatment values for the variable being assessed. Patients who discontinued prematurely completed final visit (month 12) assessments at the time of discontinuation, followed by the 4-week follow-up. A sample size of approximately 400 patients per treatment group was calculated to allow 90% power to detect a reduction in the number of COPD exacerbations of approximately 30%, adjusting for overdispersion of 2.3. This sample size ensured >95% power to detect a difference of 0.10L in FEV1, given an estimated standard deviation of 0.3 L. All tests were two-sided, with p ≤ 0.05 considered significant. The prespecified primary comparators were formoterol DPI for pre-dose FEV1 to demonstrate the contribution of budesonide and placebo for 1-hour post-dose FEV1. To address issues of multiplicity relating to multiple-dose comparisons and multiple variables of interest, a fixed-sequence testing procedure was adopted. The primary comparisons were prespecified as budesonide/formoterol pMDI 320/9 μg compared with (i) placebo for pre-dose FEV1 and 1-hour post-dose FEV1; and (ii) formoterol DPI 9 μg for pre-dose FEV1. If significant differences were obtained, comparisons continued with budesonide/formoterol pMDI 320/9 μg compared with placebo for (i) dyspnoea; (ii) SGRQ total score; and (iii) number of exacerbations. 
If significant differences were obtained for each of these key secondary variables, the lower dose of budesonide/formoterol pMDI was tested on the co-primary variables, as previously described, and, if significant differences were obtained, testing continued with number of exacerbations, dyspnoea and SGRQ total score compared with placebo. For all secondary efficacy variables, the primary comparison was budesonide/formoterol pMDI versus placebo. Primary and secondary variables were assessed as the change from baseline to the average over the randomized treatment period except for average 12-hour FEV1, mean FEV1 at 12 hours and SGRQ at end of treatment. Changes from baseline in the co-primary efficacy variables were analysed via analysis of co-variance (ANCOVA), adjusting for treatment, country and baseline value. The following three key secondary efficacy variables were identified: (i) dyspnoea; (ii) SGRQ total score; and (iii) exacerbations. Secondary variables were analysed similar to the co-primary efficacy variables; however, exacerbation rate (number/patient-treatment year) was analysed using a Poisson regression model, adjusted for country and differential randomization time, and time to first COPD exacerbation was described via Kaplan-Meier plot and analysed using the log-rank test. The number and percentage of patients experiencing clinically meaningful changes (based on a prespecified minimal important difference) in dyspnoea were analysed using the Cochran-Mantel-Haenszel test, adjusting for country. Geometric mean 24-hour urinary cortisol levels at end of treatment were compared between treatment groups using a multiplicative ANCOVA model. Mean changes from baseline to the average during the randomized treatment period in 12-lead ECG, and Holter variables and ophthalmology variables were analysed using a model similar to that used for the co-primary variables. BMD variables (natural logarithm of the analysis timepoint minus the natural logarithm of the respective baseline value for hip and spine) were analysed using an ANCOVA model adjusting for country, treatment and natural logarithm of the baseline value. Other safety data were summarized using descriptive statistics. No formal hypothesis testing of the safety data was performed.\n\nBODY.RESULTS.PATIENTS:\nOf 1964 randomized patients, 1355 completed the study (figure 2). Discontinuation was greater with placebo versus budesonide/formoterol and formoterol driven by withdrawal of consent. Time to discontinuation was significantly (p ≤ 0.004) prolonged in both budesonide/formoterol groups versus placebo. AE was the most common reason for study discontinuation, with a similar incidence across treatments (12.1–13.5%). Approximately 60% of patients had documented use of an ICS, either alone or in combination before entering the study. The percentage of patients who discontinued from the study in the formoterol and placebo groups was slightly higher in patients previously treated with ICS (34.9% and 38.0%, respectively) compared with those not previously receiving ICS before the study (26.8% and 33.9%, respectively). In contrast, patients in the budesonide/formoterol 320/9-μg group previously receiving ICS had a lower discontinuation rate compared with those not previously receiving ICS (23.1% vs 34.1%, respectively). In the budesonide/formoterol 160/9-μg group, the percentage of patients who discontinued the study was similar for those who were previously receiving an ICS (29.5%) and those who were not (28.2%).\nFig. 
2Patient disposition. AE = adverse event; bid = twice daily; BUD = budesonide; DPI = dry powder inhaler; FM = formoterol; med = medication; PL = placebo; pMDI = pressurized metered-dose inhaler. Most demographic and disease characteristics at baseline were similar across treatments (table I). However, in the serial spirometry subset (n = 491), mean percentage reversibility was greater in the budesonide/formoterol 320/9 μg (19.9%), budesonide/formoterol 160/9 μg (20.7%) and placebo (19.5%) groups versus formoterol (16.9%). COPD severity[1] was moderate in 17.8%, severe in 60.4% and very severe in 21.5% of patients. Common co-morbid conditions included hypertension (41.6%), lipid profile abnormalities (22.0%), cardiac disease (17.7%), diabetes mellitus (11.0%), osteoporosis (10.5%), cataracts (5.2%), atrial fibrillation/arrhythmia (4.3%) and congestive cardiac failure (2.9%).\nTable IPatient demographic and baseline clinical characteristics of randomized patients\n\nBODY.RESULTS.EFFICACY EVALUATIONS.PULMONARY FUNCTION.CO-PRIMARY ASSESSMENTS:\nImprovements in pre-dose FEV1 were significantly greater for budesonide/formoterol 320/9 μg compared with formoterol (primary comparison; p = 0.008) and for both budesonide/formoterol dosages compared with placebo (p < 0.001) [figure 3a]. Although improvements in 1-hour post-dose FEV1 were significantly greater for both budesonide/formoterol dosages compared with placebo (primary comparison; p < 0.001), budesonide/formoterol 320/9 μg also resulted in significantly greater improvements compared with formoterol (p = 0.023) [figure 3b]. Improvements from baseline were apparent at the first assessment (pre-dose FEV1 at end of month 1; 1-hour post-dose FEV1 at day of randomization) and overall maintained over the 12-month treatment period for both budesonide/formoterol dosages (figure 3a and b).\nFig. 3Co-primary efficacy endpoints. Least squares mean change from baseline by study visit over the randomized treatment period in (a) pre-dose forced expiratory volume in 1 second (FEV1) and (b) 1-hour post-dose FEV1. BUD = budesonide; DPI = dry powder inhaler; FM = formoterol; pMDI = pressurized metered-dose inhaler. * p < 0.001 vs placebo; † p ≤ 0.023 vs FM.\n\nBODY.RESULTS.EFFICACY EVALUATIONS.PULMONARY FUNCTION.SECONDARY ASSESSMENTS:\nMorning and evening PEF improved significantly for all active treatments compared with placebo (p ≤ 0.012) and for both budesonide/formoterol dosages compared with formoterol (p ≤ 0.017) [table II]. Results for pre-dose and 1-hour post-dose FVC are presented in the online supplement (see figures S1a and b in the supplementary material ['ArticlePlus'] at http://links.adisonline.com/DGZ/A5).\nTable IIMean (SD) changes from baseline in additional pulmonary function assessments In the serial spirometry subset, a ≥15% improvement in FEV1 was observed at 5 minutes after dose administration (first assessment) with all active treatments on the day of randomization (budesonide/formoterol 320/9 μg, 17.7%; budesonide/formoterol 160/9 μg, 20.3%; formoterol 9 μg, 16.5%) and at end of treatment (budesonide/formoterol 320/9 μg, 21.8%; budesonide/formoterol 160/9 μg, 22.4%; formoterol 9 μg, 15.0%) [figures 4a, b and c]. There was no evidence of a diminished effect at end of treatment over the 12-hour period in the budesonide/formoterol groups. 
Mean FEV1 at 12 hours and baseline-adjusted average 12-hour FEV1 were significantly improved with both budesonide/formoterol dosages compared with placebo on the day of randomization and at end of treatment (p ≤ 0.002) [table II]. Although both budesonide/formoterol dosages demonstrated significant improvements in mean FEV1 at 12 hours and in baseline-adjusted average 12-hour FEV1 compared with formoterol on the day of randomization (p ≤ 0.029), only budesonide/formoterol 320/9 μg demonstrated this effect at the end of treatment (p ≤ 0.004) [table II].\nFig. 4Mean percentage change from baseline in forced expiratory volume in 1 second (FEV1) over 12 hours at randomization and end of treatment (EOT) for (a) budesonide (BUD)/formoterol (FM) 320/9 μg twice daily (bid) vs placebo, (b) BUD/FM 160/9 μg bid vs placebo and (c) FM 9 μg bid vs placebo. DOR = day of randomization; DPI = dry powder inhaler; pMDI = pressurized metered-dose inhaler. No significant differences in pre-dose IC were observed among the treatment groups (figure 5a). Improvements from baseline to the average over the randomized treatment period in 1-hour post-dose IC were significantly greater with both budesonide/formoterol dosages compared with placebo (p < 0.001) and formoterol (p ≤ 0.018) [figure 5b]. Both dosages of budesonide/formoterol resulted in a 350 mL (≈25%) improvement from baseline to the average over the randomized treatment period in 1-hour post-dose IC, and formoterol resulted in a 210 mL (17%) improvement in 1-hour post-dose IC.\nFig. 5Least squares mean change from baseline by study visit over the randomized treatment period in (a) pre-dose inspiratory capacity (IC) and (b) 1-hour post-dose IC. BUD = budesonide; DPI = dry powder inhaler; FM = formoterol; pMDI = pressurized metered-dose inhaler. * p < 0.001 vs placebo; † p = 0.01 vs FM; ‡ p < 0.05 vs FM; § p < 0.01 vs placebo.\n\nBODY.RESULTS.EFFICACY EVALUATIONS.CHRONIC OBSTRUCTIVE PULMONARY DISEASE (COPD) EXACERBATIONS:\nTime to first COPD exacerbation was significantly prolonged with both budesonide/formoterol dosages compared with placebo (p ≤ 0.004) and with budesonide/formoterol 320/9 μg compared with formoterol (p = 0.026) [figure 6]. In addition, significant reductions in the overall number of exacerbations per patient-treatment year were observed with budesonide/formoterol 320/9 μg and 160/9 μg versus placebo (37% and 41%, respectively; p < 0.001) and formoterol (25% and 29%, respectively; p ≤ 0.004) [see figure S2 in the supplementary material]. These reductions were driven by exacerbations treated with oral corticosteroids, the rate of which was reduced with budesonide/formoterol 320/9 μg and 160/9 μg versus placebo (37.5% and 42.9%, respectively) and formoterol (24.1% and 30.6%, respectively) [p ≤ 0.006]. In addition, the percentage of patients in the budesonide/formoterol 320/9 μg, budesonide/formoterol 160/9 μg, formoterol and placebo groups who experienced an exacerbation was greater in those receiving ICS therapy either alone or in combination before the study (33.7%, 38.2%, 39.9% and 40.1%, respectively) compared with those not previously receiving ICS (25.8%, 24.9%, 29.4% and 32.8%, respectively).\nFig. 6Kaplan-Meier probability curve for the time to first chronic obstructive pulmonary disease exacerbation during randomized treatment. BUD = budesonide; DPI = dry powder inhaler; FM = formoterol; pMDI = pressurized metered-dose inhaler. 
* p ≤ 0.004 vs placebo; † p = 0.026 vs FM.\n\nBODY.RESULTS.EFFICACY EVALUATIONS.HEALTH STATUS AND COPD SYMPTOMS:\nImprovements in SGRQ total score were significantly greater for both budesonide/formoterol dosages compared with placebo (p ≤ 0.006) and for budesonide/formoterol 160/9 μg compared with formoterol (p = 0.006; table III). The improvements from baseline in SGRQ total score in the active treatment groups were not clinically meaningful versus placebo (i.e. decrease of ≥4 points). Significantly greater improvements in COPD symptom variables were observed for both budesonide/formoterol dosages versus placebo for all variables (BCSS, dyspnoea score, cough score, sputum score, sleep score, percentage of awakening-free nights and total daily rescue medication use) [p ≤ 0.003]. Compared with formoterol, mean improvements were significantly greater with budesonide/formoterol 320/9 μg (p ≤ 0.038) for all COPD symptom variables except sputum score and percentage of awakening-free nights, and with budesonide/formoterol 160/9 μg (p ≤ 0.047) for all COPD symptom variables except BCSS, dyspnoea and sputum score (table IV).\nTable IIIMean changes (SD) in SGRQ total and domain scores from baselinea to end of treatmentbTable IVMean (SD) changes in chronic obstructive pulmonary disease (COPD) symptom variables from baselinea to the average over the randomized treatment period Improvements (i.e. reductions) in dyspnoea scores were significantly greater for all active treatments versus placebo (p ≤ 0.003) and for budesonide/formoterol 320/9 μg versus formoterol (p = 0.032; table IV). All active treatment arms demonstrated ≥0.2 points (minimal important difference) change from baseline, but only budesonide/formoterol 320/9 μg had an increase of ≥0.2 points over placebo. A significantly (p < 0.001) greater percentage of patients in both budesonide/formoterol groups and in the formoterol group experienced clinically meaningful improvements in dyspnoea versus placebo (see figure S3 in the supplementary material).\n\nBODY.RESULTS.SAFETY EVALUATIONS:\nMean treatment exposure was lowest for placebo (270 days) compared with budesonide/formoterol 320/9 μg (305 days), budesonide/formoterol 160/9 μg (299 days) and formoterol (289 days). The most commonly reported AE (irrespective of causality) was COPD, which had a lower incidence with budesonide/formoterol 320/9 μg versus placebo and a slightly higher incidence with budesonide/formoterol 160/9 μg versus placebo (table V). AEs considered by the investigator to be related to study medication were generally similar among treatment groups with the most commonly reported being oral candidiasis, COPD and dysphonia (see table SI in the supplementary material).\nTable VOverall adverse events (AEs) [irrespective of relationship to study medication] reported by ≥3% of patients The total incidence of pneumonia-related AEs (pneumonia, bronchopneumonia, lobar pneumonia and pneumonia staphylococcal) was similar for budesonide/formoterol 320/9 μg and 160/9 μg (4.0% and 3.4%, respectively) compared with formoterol (3.4%) and placebo (5.0%) [see table SII in the supplementary material]. For potential lung infections other than pneumonia, the incidence was slightly higher in the active treatment groups versus placebo, which was driven largely by bronchitis (see table SII in the supplementary material). 
AEs typically or potentially associated with local and systemic effects of inhaled corticosteroids (local effects: aphonia, dysphonia, oral candidiasis and candidiasis; systemic effects: weight gain, adrenal suppression, ocular effects, skin effects, psychiatric disorder, diabetes control, thirst, taste effects and bone effects) were more frequent with budesonide/formoterol 320/9 μg (10.3% and 4.0%, respectively) and budesonide/formoterol 160/9 μg (5.7% and 4.5%, respectively) than formoterol (0.6% and 2.6%, respectively) and placebo (2.5% and 2.7%, respectively). The overall incidence of LABA class effects (i.e. tremor, palpitation, tachycardia, potassium changes, glucose changes, headache, agitation, anxiety, sleep effects and muscle cramp) was low, but higher among active treatment groups (budesonide/formoterol 320/9 μg [9.5%], budesonide/formoterol 160/9 μg [8.9%] and formoterol [6.5%]) versus placebo (4.8%). The incidence of cardiac-related AEs was higher for all active treatments (10.5–11.3%) versus placebo (6.9%). The most common cardiac-related AEs were hypertension and angina pectoris (budesonide/formoterol 320/9 μg [2.4% and 1.2%, respectively], budesonide/formoterol 160/9 μg [1.6% and 0.8%, respectively], formoterol [2.6% and 1.8%, respectively] and placebo [2.1% and 1.0%, respectively]). Although the number of atrial fibrillation AEs was low, all cases occurred within the active treatment groups. Discontinuations due to AEs (DAEs) occurred in 238 patients, with a similar incidence observed across treatment groups (range 11.3–12.5%) [see table SIII in the supplementary material]. The most common DAE was COPD, which was highest in the formoterol group (7.3%) and lowest in the budesonide/formoterol 320/9 μg group (4.0%); the incidence was similar in the budesonide/formoterol 160/9 μg (6.1%) and placebo (6.0%) groups (see table SIII in the supplementary material). The overall incidence of non-fatal serious adverse events (SAEs) was highest with formoterol (17.8%) compared with the other treatment groups and higher with budesonide/formoterol 320/9 μg (15.6%) and budesonide/formoterol 160/9 μg (13.6%) versus placebo (12.1%) [see table SIV in the supplementary material]. The most common non-fatal SAEs (occurring in five or more patients [all treatment groups combined]) were COPD (6.8%), pneumonia (1.5%), atrial fibrillation (0.5%), angina pectoris (0.3%), bronchitis (0.3%) and coronary artery disease (0.3%). These non-fatal SAEs occurred at a similar incidence in all treatment groups, except for COPD, which was slightly more common in the active treatment groups (budesonide/formoterol 320/9 μg [7.1%], budesonide/formoterol 160/9 μg [6.7%] and formoterol [7.9%]) versus placebo (5.6%). Fifteen patients died during the randomized treatment period with no imbalance across treatments: three in the budesonide/formoterol 320/9 μg group; six in the budesonide/formoterol 160/9 μg group; two in the formoterol group; and four in the placebo group. Fifteen patients died after the randomized treatment period (≥2 days after stopping study treatment) with no imbalance across treatments: five in the budesonide/formoterol 320/9 μg group; two in the budesonide/formoterol 160/9 μg group; four in the formoterol group; and four in the placebo group. None of the deaths were considered drug related by the investigator. Clinically significant changes in vital signs, 24-hour urinary cortisol, and 12-lead ECGs, QT interval and Holter assessments were rare. 
The geometric mean values of 24-hour urinary cortisol at 6 months and end of treatment were lower in both budesonide/formoterol dosage groups compared with formoterol and placebo (see table SV in the supplementary material). Significant differences in 24-hour urinary cortisol were observed between the budesonide/formoterol 320/9 μg and placebo groups (p = 0.035) at month 6 and between the budesonide/formoterol 320/9 μg and formoterol groups (p = 0.044) at end of treatment. Corrected QT (QTc) intervals ≥450 msec were experienced by similar percentages of patients across all treatment groups, whereas few patients experienced QTc intervals ≥500 msec or a QTc change ≥60 msec (see table SVI in the supplementary material). Data from ECG and Holter recordings showed a similar incidence of new onset atrial fibrillation in the active and placebo groups. BMD was stable over the 1-year study period in all treatment groups, with small but statistically significant differences in changes from baseline observed for budesonide/formoterol 320/9 μg compared with all other treatments for total lumbar spine BMD (p ≤ 0.037) and for budesonide/formoterol 320/9 μg versus formoterol for total hip BMD (p = 0.012) [see table SVII in the supplementary material]. The differences between the treatment groups in changes from baseline in BMD were of unclear clinical relevance, as the geometric least squares mean ratios for these treatment group comparisons were close to 1 (range 0.98–0.99). Minor increases in lenticular opacities and intraocular pressure were noted across all treatment groups in the ophthalmological analysis subset; the magnitude of these changes was similar among the treatment groups (see table SVIII in the supplementary material). Small but significant differences were observed for the change from baseline in posterior subcapsular score between the budesonide/formoterol groups (p = 0.022). Clinically significant changes in ophthalmological assessment were infrequent.\n\nBODY.DISCUSSION:\nIn this study, both budesonide/formoterol dosage strengths resulted in significant improvements from baseline over the randomized treatment period in pulmonary function, health status and COPD symptoms, as measured by dyspnoea, cough and sputum production, and a clinically significant reduction in COPD exacerbation rate compared with placebo. In addition, both budesonide/formoterol pMDI dosages were well tolerated relative to formoterol alone and placebo. The results of this 12-month study confirm those from the earlier related 6-month study by Tashkin et al.[4] and are consistent with those from studies of budesonide/formoterol DPI.[2,3] Taken together, the results demonstrate that this ICS/LABA combination provides benefits beyond formoterol alone in the treatment of COPD. The contribution of budesonide to improvements from baseline in pulmonary function and COPD symptoms was demonstrated by the greater efficacy of budesonide/formoterol 320/9 μg compared with formoterol alone for the co-primary variables of pre-dose FEV1 and 1-hour post-dose FEV1, and for dyspnoea and total BCSS scores, supporting use of the higher-dose combination for COPD symptoms. Both budesonide/formoterol dosage strengths demonstrated greater improvements from baseline compared with formoterol alone in morning and evening PEF, 12-hour FEV1, symptom scores and rescue medication use. 
In addition, the 25–29% reduction in exacerbation rate observed with both budesonide/formoterol dosage strengths beyond the substantial reduction achieved with formoterol alone further demonstrates the important contribution of budesonide to the combination product. The magnitude of the reduction in exacerbation rates reported in this study is similar to that reported previously for fixed-dose regimens of budesonide/formoterol 320/9 μg twice daily[2–4] and fluticasone propionate/salmeterol[11–13] relative to LABA alone. A key difference between this 12-month study and the 6-month study of similar design reported by Tashkin et al.[4] is that this study was powered to show a difference in exacerbations, while the 6-month study was not. This study demonstrated a significant reduction in exacerbation rates in patients treated with budesonide/formoterol 320/9 μg and 160/9 μg compared with those receiving formoterol or placebo. The fact that a greater percentage of patients in the formoterol and placebo groups who previously received ICS therapy, either alone or in combination, experienced exacerbations compared with those who did not previously receive ICS therapy may result in a skewing of the data; however, a similar response was observed in combination budesonide/formoterol patients with a greater percentage of patients who previously received ICS experiencing exacerbations compared with patients who were not previously treated with ICS therapy. Similarly, a greater percentage of patients who previously received ICS therapy discontinued in the formoterol and placebo groups than those who did not previously receive ICS therapy; however, this trend was reversed in the combination budesonide/formoterol groups. The long-term nature of this study allowed for a more thorough assessment of tolerability compared with the 6-month study.[4] No differences in pneumonia incidence were observed among treatment groups in this study or in the 6-month study, despite a shorter exposure time in the placebo group.[4] These results contrast with previous COPD studies that showed an increased incidence of pneumonia with fluticasone propionate/salmeterol treatment.[11–14] However, in this study and the 6-month study,[4] the incidence of potential lung infections other than pneumonia was slightly higher in all active treatments compared with placebo; these differences were largely driven by an increased incidence of bronchitis. The incidence of SAEs, including COPD SAEs, was slightly higher in both budesonide/formoterol groups compared with placebo. This finding may be attributed to the earlier discontinuation in the placebo group, which resulted in a shorter study drug exposure (35 and 29 fewer days vs budesonide/formoterol 320/9 μg and 160/9 μg, respectively). Although the overall incidence of atrial fibrillation, a known LABA class effect, was small (1.1%), an imbalance was noted between the formoterol-containing arms and placebo. However, no differences in new-onset atrial fibrillation were apparent among groups based on 24-hour Holter monitoring or ECG data collected at repeated visits during the randomized treatment period. 
Overall, the safety profile of budesonide/formoterol pMDI in this 12-month study was similar to that reported in the 6-month study by Tashkin et al.,[4] suggesting no increase in safety concerns with a longer duration of treatment.\n\nBODY.CONCLUSIONS:\nBoth dosage strengths of budesonide/formoterol pMDI (320/9 μg and 160/9 μg) were effective and well tolerated over 1 year of treatment in this population of patients with moderate to very severe COPD. In addition, budesonide/formoterol pMDI 320/9 μg demonstrated greater efficacy compared with formoterol for pre-dose FEV1, suggesting that the higher dosage strength containing twice the daily dose of budesonide is appropriate in this patient population.\n\nBODY.ELECTRONIC SUPPLEMENTARY MATERIAL:\nSupplementary material, approximately 247 KB.\n\n**Question:** Compared to Placebo what was the result of Budesonide/formoterol pMDI 160/9 μg on Time to first COPD exacerbation?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly increased\n"
}
] |
significantly increased
|
474
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Randomized controlled trial of clopidogrel to prevent primary arteriovenous fistula failure in hemodialysis patients\n\n ABSTRACT:\nThe optimal vascular access for chronic maintenance hemodialysis is the arteriovenous fistula (AVF). Several studies suggest a role for antiplatelet agents in the prevention of primary AVF failure. A double-blind, randomized trial was conducted to assess the efficacy and safety of clopidogrel in hemodialysis patients. Ninety three patients were randomized to receive 75 mg/daily of clopidogrel or placebo. The treatment was initiated 7–10 days prior to scheduled access surgery and continued up to six weeks postoperatively, and then patients were monitored for six months. The primary outcome was AVF failure eight weeks after fistula creation. With a permuted block randomization schedule, 46 patients received clopidogrel and 47 patients received control placebo. The primary AVF failures at two months were 21.6% in placebo group and 5.2% in clopidogrel group (P = 0.03). The hazard ratio for the incidence of primary AVF failure was 0.72 (CI 95%, 0.41–1.01). Analysis of covariables indicated that this effect occurred principally as a result of clopidogrel administration. First hemodialysis from newly created AVF in clopidogrel group was significantly more successful than placebo group (P = 0.008). No life-threatening adverse event or severe bleeding was recorded in both groups. Clopidogrel seems to be effective and safe for prevention of primary AVF failure in hemodialysis patients.\n\nBODY.INTRODUCTION:\nPerformance of a successful hemodialysis procedure requires a functional vascular access. The preferred type of access is a native fistula because they have the lowest risk of complications, lowest need for intervention, and the best long-term patency.[1] Once an arteriovenous fistula (AVF) is created, it must develop to the point that it is usable. Vascular access dysfunction is one of the most important causes of morbidity in the hemodialysis population.[2] Primary failure of native fistulas occurs as a result of either thrombosis within the first several weeks following surgical creation (early thrombosis), or inadequate maturation of the vein.[3] The primary AVF failure rate is approximately 9–50%.[4–6] Fistula evaluation 4–6 weeks after creation should be considered mandatory.[7] The clinical manifestations of early fistula failure are failure to develop adequately to permit repetitive cannulation for dialysis, inadequate flow to support dialysis, and thrombosis. The characteristic pathology that results in AVF failure is a juxta-anastomotic stenosis.[8] Whether primary AVF failure can be prevented with pharmacologic agents has not been extensively examined. 
Several studies have indicated that the frequency of AVF failure and loss can be reduced with antiplatelet agents.[9–18] Although those results are encouraging, they do not provide conclusive evidence of the efficacy of antiplatelet agents among patients with AVF. On the basis of these considerations, we performed a randomized, double-blind trial to test the hypothesis that clopidogrel, would prevent primary AVF failure among hemodialysis patients.\n\nBODY.MATERIALS AND METHODS:\nThe study was a randomized, double-blind trial. Patients of minimum age 18 years close to the initiation of chronic hemodialysis requiring AVF, and patients who were undergoing chronic hemodialysis but requiring a new AVF at a different site were the inclusion criteria. Exclusion criteria included patients with a history of gastrointestinal bleeding or previous bleeding episodes within six months prior to initiation of the study, patients already receiving chronic anticoagulation therapy (antiplatelet agents or warfarin), patients with terminal or life-threatening disease, pregnancy, malignant hypertension, a platelet count of <100,000/mm3, and other demonstrated medical conditions that would make antiplatelet therapy dangerous. All patients were recruited from the outpatient hemodialysis program at Jondi Shapour University, and the same surgical team placed all fistulas. Randomization was performed centrally, by the coordinating center. The randomization was stratified according to medical center with a permuted block scheme, with a block size of four and equal allocation. After identifying and obtaining consent from eligible participants, the local study coordinator telephoned the coordinating center to obtain a randomization number, which corresponded to a specific medication bottle available in the local research pharmacy. Neither the details of the randomization sequence nor the identity of the medication assignment was known to the participant or any personnel at the participating sites. Consenting eligible participants were randomized to receive either clopidogrel (75 mg/daily) or matching placebo (47 patients in placebo group and 46 patients in clopidogrel group). The treatment was initiated 7–10 days prior to scheduled access surgery and continued up to six weeks postoperatively, with full approval by the Jondi Shapour University Institutional Review Board. Patients were monitored for occurrence of complications, need for hemodialysis until six months after trial. The primary null hypothesis of the study was that clopidogrel would have no preventive effect on the incidence of primary AVF failure. The primary outcome was AVF failure eight weeks after fistula creation. Fistula failure was determined by a member of the study team (either the study coordinator or the site principal investigator), who was blinded to the treatment allocation. The fistula was classified as patent if a bruit was detectable along the vein at least 8 cm proximal to the arteriovenous anastomosis throughout systole and diastole. Secondary outcomes included adverse events and mortality. Platelet hemostatic function was measured monthly, as whole-blood bleeding time. Routine blood chemistry profiles, dialysis prescriptions, body weights, medications, and complications for all patients were recorded in a computerized database, and thus were available for inclusion in the final data analysis. We obtained detailed information on bleeding events. 
Assessment of the severity of bleeding episodes was performed by a panel blinded to the treatment assignments. Discontinuation of the study drug following any bleeding event was the rule.\n\nBODY.MATERIALS AND METHODS.STATISTICAL ANALYSIS:\nData are presented as mean ± SE for continuous variables and as percentages for categorical variables. All data were tested for normality using the method of Kolmogorov-Smirnov. Statistical analyses were performed on an intention-to-treat basis. The t test was used when the means of two groups were compared. On the basis of intention-to-treat principles, all other participants for whom study medications were discontinued continued to be monitored according to the protocol. All hypothesis tests were conducted by using a significance level of 0.05 (two-sided). The statistical program SPSS version 13 (SPSS, Chicago, IL) was used to analyze the data.\n\nBODY.RESULTS:\nBetween December 2006 and March 2008, a total of 93 patients met the study criteria for enrollment. The demographic and baseline laboratory findings for each group are summarized in Table 1. There were no significant differences in covariables between groups. None of the covariables (covariables thought to influence the risk of AVF included age, gender, diabetes mellitus, bleeding time, and blood pressure) measured at baseline or follow-up times was correlated with the development of AVF failure. All patients were taking study medication 7–10 days prior to surgery. However, the medication had to be discontinued prematurely in 18 (19.4%) patients because of no intention to complete trial by six cases and complication occurring in 11 cases. One patient died before the end of trial. Two patients in clopidogrel group and three patients in placebo group were enrolled after creation of a second fistula. Prior AVFs failed because of late fistula failure in all of them. Finally, 75 patients completed trial (38 patients in clopidogrel and 37 patients in placebo groups). None of the patients was lost to follow-up. Table 1 Patients characteristics at baseline Characteristic All patients (%) Clopidogrel (%) Placebo (%) P value Age (years) 45.03 ± 1.2 44.23 ± 3.36 45.8 ± 2.84 0.47 Males 48 (51.6) 24 (25.8) 24 (25.8) 0.91 Diabetes mellitus 25 (26.9) 14 (15.1) 11 (11.8) 0.44 Patients on hemodialysis 63 (67.7) 29 (31.2) 34 (36.6) 0.33 Drug use by the end of trial 75 (80.6) 38 (40.9) 37 (39.8) 0.63 Five and three fistulae in clopidogrel-treated and placebo groups were proximal AVFs. Two patients in clopidogrel-treated group (5.26%) showed an early failure of the AVF compared to eight patients (21.62%) in placebo group (P < 0.05) [Table 2]. There was significant benefit of active treatment in the prevention of AVF failure. The hazard ratio for the incidence of primary AVF failure was 0.72 (CI 95%, 0.41–1.01). Fistula locations had no effects on the development of AVF failure in either group. During follow-up from eight weeks to six months no important complication occurred. Sixty patients required hemodialysis during this time. Hemodialysis was tried in 26 patients of clopidogrel group within six months after AVF creation and it was successful in 24 patients (92.3%). In placebo group, hemodialysis from new AVF was tried in 34 patients and it was successful in 24 cases (70.5%). First hemodialysis from newly created AVF in clopidogrel group was significantly more successful than placebo group (P = 0.008). The cumulative incidence of bleeding was similar between groups [Table 3]. 
No severe bleeding episode such as intracranial hemorrhage or life-threatening bleeding was recorded during active treatment period. There were no deaths attributable to bleeding in either treatment group. Bleeding times were similar at baseline for clopidogrel group (8.1 ± 0.3 min) and placebo group (8.4 ± 0.6 min) and remained stable (8.5 ± 0.4 min for clopidogrel group and 8.6 ± 0.3 min for placebo group) throughout the study period (P = 0.21). In addition, there were no differences between baseline and follow-up hematocrit values or changes in recombinant human erythropoietin doses during the study period for either group. Table 2 Clinical outcomes at the end of trial Outcome All patients (%) Clopidogrel (%) Placebo (%) P value Primary AVF failure Drug use by the end of trial 10 (26.88) 2 (5.26) 8 (21.62) 0.03 Premature cession of drug 2 (2.15) 0 (0) 2 (2.15) Table 3 Complications and mortality Complication All patients (%) Clopidogrel (%) Placebo (%) P value GI bleeding 5 (5.3) 2 (2.1) 3 (3.2) 0.31 Non GI tract bleeding 9 (9.6) 5 (5.3) 4 (4.3) 0.63 Death events 4 (4.3) 2 (2.1) 2 (2.1) 0.47 \n\nBODY.DISCUSSION:\nChronic maintenance hemodialysis requires stable and repetitive access to the intravascular compartment in order to deliver high rates of blood flow to the extracorporeal circuit. The AVF is the method of choice for the establishment of hemodialysis vascular access in patients with endstage renal disease.[1] The fistula is relatively simple to perform under local anesthesia and, when successfully established, is easy to needle and relatively free from complications. However, a significant proportion (9–50%) of fistulas fails early within three months of surgery.[4–6] An AVF with primary failure is defined as a fistula that never provided reliable hemodialysis.[18] Vascular access failure is the most common reason for hospitalization among hemodialysis patients.[19] The typical lesion of access thrombosis is neointimal vascular smooth muscle cell proliferation in the anastomotic draining vein. Platelet activation from endothelial injury may play an important role in stimulating platelet aggregators such as PDGF and thromboxane A2, in addition to directly stimulating vascular intimal proliferation.[18] Therefore, the therapeutic potential of antiplatelet agents including aspirin, sulfinpyrazone, dipyridamole, and ticlopidine were tested.[9–17] Our study was undertaken to determine the effects of clopidogrel on the incidence of primary AVF failure among newly created AVFs. We observed a significant risk reduction in the primary AVF failure in active treatment group compared to placebo group. The results of our analysis suggest that daily administration of 75 mg of clopidogrel, beginning 7–10 days prior to AVF creation, was successful in preventing the development of vascular failure with acceptable side effects. We were unable to account for the differences observed in our clinical trial on the basis of age, gender, diabetes mellitus, bleeding times, hematocrit levels, or weekly doses of recombinant human erythropoietin. This finding suggests that the risk reduction in vascular failure might be attributed to clopidogrel administration. Our results are supported by recent Cochrane report.[20] This meta-analysis confirmed the beneficial effect of antiplatelet treatment as an adjuvant to increase the patency of AVFs in the short term. However, there have been multiple studies showing variable results of antiplatelet agents on vascular access failure. 
Yevzlin et al. showed a negative association between antiplatelet therapy and access patency.[21] In this trial, usage of some drugs was not associated with significant risk reduction in access failure. Moreover, antiplatelet therapy in patients with access failure was associated with significantly increased risk of access failure. Kaufman et al. demonstrated no change in the risk of graft thrombosis with aspirin plus clopidogrel therapy.[22] They also noted that in chronic hemodialysis patients there is a trend toward increased thrombosis with aspirin therapy. Also, Kooistra et al. were unable to demonstrate a benefit with low-dose aspirin on thrombovascular events in 68 hemodialysis patients.[23] In contrast, when all placebo-controlled studies of antiplatelet agents in patients with new primary fistulae were combined, the thrombosis rate in the control group was significantly higher than in the active treatment group.[20] Three trials compared ticlopidine with placebo with a total number of 312 participants undergoing AVF formation or graft interposition. All three trials comparing ticlopidine with placebo favored treatment in both AVF and vascular grafts. In the Fiskerstrand study, two out of six patients in the ticlopidine group, compared with five out of nine in the placebo group, developed fistulae thromboses at one month (OR = 0.40, CI 95%, 0.05–3.42).[15] In the earlier Grontoft study, only two out of 19 who received treatment developed fistulae thromboses compared to eight out of 17 on placebo (OR = 0.13, CI 95%, 0.02–0.76).[13] In Grontoft 1998, 16 out of 130 patients who received ticlopidine developed thromboses in the fistulae compared with 25 out of 131 in the placebo group (OR = 0.60, CI 95%, 0.30–1.18).[14] The overall result of the meta-analysis also favored treatment (OR = 0.47, CI 95%, 0.26–0.85). The overall P-value was 0.01.[20] We also assessed the effect of clopidogrel on the successful initiation of hemodialysis via AVF. The rate of performing a successful first hemodialysis via AVF was greater in the clopidogrel group. The overall incidence of bleeding events was 15% in our study. According to Kaufman et al., we expected to encounter 16 episodes of bleeding during six months in our patients.[22] However, the incidence of bleeding episodes was lower than expected. This finding might be related to restrictive exclusion criteria. This study is the first carefully monitored trial that was limited to AVF. It is a well-known fact that AVFs have lower thrombosis rates compared with AVF grafts. In multiple studies, AVFs have been shown to have significantly improved patency rates and lower complication and infection rates. Some studies pointed out that antiplatelet therapy might be more effective in fistulas than in arteriovenous grafts.[14,22] We studied this hypothesis for the first time. Our results suggested that clopidogrel is an effective therapy in the prevention of primary AVF failure. The major limitation of our study is the small number of patients, but in view of the promising results we believe that our preliminary findings deserved prompt communication. However, the data must be interpreted with caution because the pharmacological approach to prevent vascular access thrombosis in hemodialysis is still in its infancy. Overall, the effect of antiplatelet agents on vascular access patency needs further investigation. A prospective randomized controlled trial with a larger number of patients is warranted. 
Currently, the National Institutes of Health-sponsored Dialysis Access Consortium is conducting an ongoing double-blind, multicenter, randomized evaluation of clopidogrel in AVF patency.[24]\n\nBODY.CONCLUSION:\nPrimary AVF failure remains a major problem for hemodialysis patients. Vascular access thrombosis prophylaxis needs to start early in the ESRD patient. Clopidogrel, beginning 7–10 days prior to AVF creation and continuing for six weeks, seems to prevent primary AVF failure with acceptable side effects in selected hemodialysis patients.\n\n**Question:** Compared to Placebo what was the result of Clopidogrel on The primary arteriovenous fistula failures at two months?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly decreased\n"
}
] |
significantly decreased
|
315
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Efficacy and Tolerability of Budesonide/Formoterol in One Hydrofluoroalkane Pressurized Metered-Dose Inhaler in Patients with Chronic Obstructive Pulmonary Disease\n\n ABSTRACT:\nBackground: Combination therapy with a long-acting bronchodilator and an inhaled corticosteroid (ICS) is recommended in patients with chronic obstructive pulmonary disease (COPD) who have frequent exacerbations. The efficacy and tolerability of the combination of budesonide/formoterol have been demonstrated in patients with COPD when administered via the dry powder inhaler (DPI) in a 1-year study and when administered via the hydrofluoroalkane (HFA) pressurized metered-dose inhaler (pMDI) in a 6-month study. Objective: This study assessed the long-term efficacy and tolerability of budesonide/formoterol HFA pMDI in patients with moderate to very severe COPD. Methods: This was a 12-month, randomized, double-blind, double-dummy, parallel-group, active- and placebo-controlled, multicentre study (NCT00206167) of 1964 patients aged ≥40 years with moderate to very severe COPD conducted from 2005 to 2007 at 237 sites in the US, Europe and Mexico. After 2 weeks of treatment based on previous therapy (ICSs, short-acting bronchodilators allowed), patients received one of the following treatments twice daily: budesonide/formoterol pMDI 160/4.5 μg × two inhalations (320/9 μg); budesonide/formoterol pMDI 80/4.5 μg × two inhalations (160/9 μg); formoterol DPI 4.5 μg × two inhalations (9 μg); or placebo. Main outcome measures: The co-primary efficacy variables were pre-dose forced expiratory volume in 1 second (FEV1) and 1-hour post-dose FEV1. Results: Budesonide/formoterol 320/9 μg demonstrated greater improvements in pre-dose FEV1 versus formoterol (p = 0.008), and both budesonide/formoterol doses demonstrated greater improvements in 1-hour post-dose FEV1 versus placebo (p < 0.001). The rate of COPD exacerbations was lower in both budesonide/formoterol groups compared with formoterol and placebo (p ≤ 0.004). Both budesonide/formoterol doses were more effective than placebo (p ≤ 0.006) for controlling dyspnoea and improving health status (St George's Respiratory Questionnaire). All treatments were generally well tolerated. The incidence of pneumonia was not different for active (3.4–4.0%) and placebo (5.0%) groups. Conclusions: Budesonide/formoterol pMDI (320/9 μg and 160/9 μg) improved pulmonary function and reduced symptoms and exacerbations over 1 year in patients with moderate to very severe COPD. Only budesonide/formoterol pMDI 320/9 μg demonstrated greater efficacy for both co-primary variables compared with formoterol DPI 9 μg. 
Both budesonide/formoterol pMDI dosages were well tolerated relative to formoterol and placebo.\n\nABSTRACT.ELECTRONIC SUPPLEMENTARY MATERIAL:\nSupplementary material is available for this article at 10.2165/00003495-200969050-00004 and is accessible for authorized users.\n\nBODY.BACKGROUND:\nIn patients with chronic obstructive pulmonary disease (COPD) and frequent exacerbations, the combination of a long-acting bronchodilator and an inhaled corticosteroid (ICS) is recommended.[1] Budesonide and formoterol delivered via a single dry powder inhaler (DPI; Symbicort® Turbuhaler®; AstraZeneca, Lund, Sweden) have been shown to improve pulmonary function[2,3] and reduce COPD exacerbation rates.[2,3] Fixed-combination budesonide/formoterol is available in the US in a hydrofluoroalkane pressurized metered-dose inhaler (pMDI; Symbicort® Inhalation Aerosol; AstraZeneca LP, Wilmington, DE, USA). While most patients with COPD can benefit from either a pMDI or DPI, the pMDI formulation allows patients who may have inadequate peak inspiratory flow for optimal delivery from a DPI to use the pMDI effectively. A recent 6-month study demonstrated the efficacy and tolerability of budesonide/formoterol pMDI in patients with moderate to very severe COPD.[4] Treatment with budesonide/formoterol pMDI resulted in significant improvements in pulmonary function, dyspnoea and health-related quality of life; however, the duration of the study was limiting with respect to the analysis of low-frequency events, such as COPD exacerbations, and precluded an assessment of safety outcomes with long-term exposure.[4] In this study, the long-term efficacy and tolerability of two dosage strengths of budesonide/formoterol pMDI were assessed over 12 months in patients with moderate to very severe COPD.\n\nBODY.PATIENTS AND METHODS.PATIENTS:\nThe inclusion criteria were designed to select a population with moderate to very severe COPD with previous exacerbations (i.e. appropriate candidates for combination ICS/long-acting β2-adrenoceptor agonist [LABA] therapy): age ≥40 years, diagnosis of symptomatic COPD for >2 years, ≥10 pack-year smoking history, prebronchodilator forced expiratory volume in 1 second (FEV1) of ≤50% of predicted normal and prebronchodilator FEV1/forced vital capacity (FVC) of <70%. Patients were to have a Modified Medical Research Council dyspnoea scale score of ≥2 and a history of at least one COPD exacerbation requiring oral corticosteroids or antibacterials within 1–12 months before the first study visit. Additional enrolment criteria were the same as those in a similar 6-month study by Tashkin et al.[4]\n\nBODY.PATIENTS AND METHODS.STUDY DESIGN:\nThis was a randomized, double-blind, double-dummy, parallel-group, active- and placebo-controlled, multicentre trial (ClinicalTrials.gov identifier: NCT00206167) conducted from 2005 to 2007 at 237 sites in the US, Europe and Mexico. The study protocol was approved by a local institutional review board and ethics committee, and written informed consent was obtained from patients. The study was designed to conform with the Declaration of Helsinki, and was consistent with the International Conference on Harmonisation and Good Clinical Practice and applicable regulatory requirements. After meeting eligibility criteria, patients entered a 2-week run-in period, during which they received ICS monotherapy if previously stable on ICS (alone or in combination) and ipratropium bromide at a fixed dose if previously receiving anticholinergics. 
Albuterol (salbutamol) was permitted for rescue use throughout the study. At randomization, all previous ICSs were discontinued and patients were randomized to one of four treatments (figure 1). Patients who were treated with ipratropium bromide during the run-in period were allowed to continue on it throughout the study period. Other allowed and disallowed concomitant medications were the same as those reported in the study by Tashkin et al.[4]\nFig. 1Study design. To maintain blinding, patients received both a pressurized metered-dose inhaler (pMDI) and a dry powder inhaler (DPI) containing either active treatment or double-dummy placebo (PL) as appropriate. This study used formoterol (FM) DPI (Oxis® Turbuhaler®, AstraZeneca, Lund, Sweden) as the FM comparator because FM is not available as a hydrofluoroalkane pMDI in the US. A previous study in asthma patients reported equivalent FM-related bronchodilatory effects when FM was administered in combination with budesonide (BUD) via pMDI or alone via DPI.[5] Patients were asked to return to the clinic for follow-up visits 3–8 at the end of months 1, 2, 4, 6, 9 and 12, and received a telephone call 4 weeks after the last clinic visit. bid = twice daily; R = randomization.\n\nBODY.PATIENTS AND METHODS.EFFICACY EVALUATIONS:\nThe co-primary efficacy variables, measured at all clinic visits, were pre-dose FEV1 and 1-hour post-dose FEV1. Additional pulmonary function variables included pre-dose and 1-hour post-dose FVC measured at all clinic visits, and morning and evening peak expiratory flow (PEF) recorded daily in patient diaries. In a subset of patients (n = 491 of 1964; 25%) who agreed to undergo spirometry testing, 12-hour serial spirometry was performed and pre-dose and 1-hour post-dose inspiratory capacity (IC) were collected at randomization and months 6 and 12. FEV1 was measured pre-dose and at 5, 15, 30, 60, 120, 180, 240, 360, 480, 600 and 720 minutes post-dose. Baseline-adjusted average 12-hour FEV1 was calculated as the area between the 12-hour post-dose FEV1-over-time curve and the baseline pre-dose FEV1, divided by observation time. Spirometry was performed according to American Thoracic Society guidelines.[6] Crapo-predicted normals for FEV1 were used.[7] Secondary efficacy endpoints included COPD exacerbations, dyspnoea (separate and composite scores) and health status, all assessed as previously described.[4] A COPD exacerbation was defined as worsening of COPD requiring an oral corticosteroid or hospitalization.[4] Dyspnoea was assessed daily before the evening dose of study medication using the Breathlessness Diary, a validated single-item component of the Breathlessness Cough and Sputum Scale (BCSS).[8] Health status was assessed at months 1, 2, 6 and 12 using the St George's Respiratory Questionnaire (SGRQ).[9,10] Sleep score, percentage of awakening-free nights (sleep score of 0) and study rescue medication use were assessed as previously described by Tashkin et al.[4]\n\nBODY.PATIENTS AND METHODS.SAFETY EVALUATIONS:\nSafety was assessed by adverse event (AE) reporting. Pneumonia events were reported by physicians based on the Medical Dictionary for Regulatory Activities (version 10.0) pneumonia-related preferred terms (pneumonia, bronchopneumonia, lobar pneumonia or pneumonia staphylococcal). 
Vital signs and 12-lead ECGs were evaluated as previously described by Tashkin et al.[4] Subsets of patients were assessed for 24-hour urinary cortisol (n = 179), 24-hour Holter monitoring (n = 520), bone mineral density (BMD) at the hip and spine regions (n = 326), and ophthalmological assessments (n = 461), including intraocular pressure and progression of lenticular opacities. In all patients, blood and urine samples were collected, and comprehensive physical examinations were carried out at the time of screening and at the end of month 12. Blood and urine samples were analysed by a central laboratory (Quest Diagnostics Clinical Laboratories, Van Nuys, CA, USA). Vital signs, including heart rate and blood pressure, were measured at all study visits. Samples to assess 24-hour urinary cortisol levels were collected in a subset of patients at or before randomization and within 1 week before the 6- and 12-month visits, and analysed by Quest Diagnostics. 24-Hour Holter monitoring was performed at screening and months 1 and 4, BMD assessments (two sets of dual energy x-ray absorptiometry scans in the hip and spine regions) were performed at screening and the end of month 12, and ophthalmology assessments (intraocular pressure and lenticular opacities) were performed at screening and the end of months 6 and 12.\n\nBODY.PATIENTS AND METHODS.STATISTICAL ANALYSES:\nThe efficacy analysis set (i.e. intent-to-treat population) included all randomized patients who received at least one dose of randomized study medication and contributed sufficient data for at least one co-primary or secondary efficacy endpoint to be calculated during the randomized treatment period. The safety analysis population included all randomized patients who received at least one dose of randomized study medication and from whom any post-randomization data were available. For the subsets of patients who underwent serial spirometry, 24-hour urinary cortisol, BMD, 24-hour Holter monitoring and ophthalmological analyses, analysis sets were defined as those patients who received at least one dose of randomized study medication and had baseline and on-treatment values for the variable being assessed. Patients who discontinued prematurely completed final visit (month 12) assessments at the time of discontinuation, followed by the 4-week follow-up. A sample size of approximately 400 patients per treatment group was calculated to allow 90% power to detect a reduction in the number of COPD exacerbations of approximately 30%, adjusting for overdispersion of 2.3. This sample size ensured >95% power to detect a difference of 0.10L in FEV1, given an estimated standard deviation of 0.3 L. All tests were two-sided, with p ≤ 0.05 considered significant. The prespecified primary comparators were formoterol DPI for pre-dose FEV1 to demonstrate the contribution of budesonide and placebo for 1-hour post-dose FEV1. To address issues of multiplicity relating to multiple-dose comparisons and multiple variables of interest, a fixed-sequence testing procedure was adopted. The primary comparisons were prespecified as budesonide/formoterol pMDI 320/9 μg compared with (i) placebo for pre-dose FEV1 and 1-hour post-dose FEV1; and (ii) formoterol DPI 9 μg for pre-dose FEV1. If significant differences were obtained, comparisons continued with budesonide/formoterol pMDI 320/9 μg compared with placebo for (i) dyspnoea; (ii) SGRQ total score; and (iii) number of exacerbations. 
If significant differences were obtained for each of these key secondary variables, the lower dose of budesonide/formoterol pMDI was tested on the co-primary variables, as previously described, and, if significant differences were obtained, testing continued with number of exacerbations, dyspnoea and SGRQ total score compared with placebo. For all secondary efficacy variables, the primary comparison was budesonide/formoterol pMDI versus placebo. Primary and secondary variables were assessed as the change from baseline to the average over the randomized treatment period except for average 12-hour FEV1, mean FEV1 at 12 hours and SGRQ at end of treatment. Changes from baseline in the co-primary efficacy variables were analysed via analysis of co-variance (ANCOVA), adjusting for treatment, country and baseline value. The following three key secondary efficacy variables were identified: (i) dyspnoea; (ii) SGRQ total score; and (iii) exacerbations. Secondary variables were analysed similar to the co-primary efficacy variables; however, exacerbation rate (number/patient-treatment year) was analysed using a Poisson regression model, adjusted for country and differential randomization time, and time to first COPD exacerbation was described via Kaplan-Meier plot and analysed using the log-rank test. The number and percentage of patients experiencing clinically meaningful changes (based on a prespecified minimal important difference) in dyspnoea were analysed using the Cochran-Mantel-Haenszel test, adjusting for country. Geometric mean 24-hour urinary cortisol levels at end of treatment were compared between treatment groups using a multiplicative ANCOVA model. Mean changes from baseline to the average during the randomized treatment period in 12-lead ECG, and Holter variables and ophthalmology variables were analysed using a model similar to that used for the co-primary variables. BMD variables (natural logarithm of the analysis timepoint minus the natural logarithm of the respective baseline value for hip and spine) were analysed using an ANCOVA model adjusting for country, treatment and natural logarithm of the baseline value. Other safety data were summarized using descriptive statistics. No formal hypothesis testing of the safety data was performed.\n\nBODY.RESULTS.PATIENTS:\nOf 1964 randomized patients, 1355 completed the study (figure 2). Discontinuation was greater with placebo versus budesonide/formoterol and formoterol driven by withdrawal of consent. Time to discontinuation was significantly (p ≤ 0.004) prolonged in both budesonide/formoterol groups versus placebo. AE was the most common reason for study discontinuation, with a similar incidence across treatments (12.1–13.5%). Approximately 60% of patients had documented use of an ICS, either alone or in combination before entering the study. The percentage of patients who discontinued from the study in the formoterol and placebo groups was slightly higher in patients previously treated with ICS (34.9% and 38.0%, respectively) compared with those not previously receiving ICS before the study (26.8% and 33.9%, respectively). In contrast, patients in the budesonide/formoterol 320/9-μg group previously receiving ICS had a lower discontinuation rate compared with those not previously receiving ICS (23.1% vs 34.1%, respectively). In the budesonide/formoterol 160/9-μg group, the percentage of patients who discontinued the study was similar for those who were previously receiving an ICS (29.5%) and those who were not (28.2%).\nFig. 
2Patient disposition. AE = adverse event; bid = twice daily; BUD = budesonide; DPI = dry powder inhaler; FM = formoterol; med = medication; PL = placebo; pMDI = pressurized metered-dose inhaler. Most demographic and disease characteristics at baseline were similar across treatments (table I). However, in the serial spirometry subset (n = 491), mean percentage reversibility was greater in the budesonide/formoterol 320/9 μg (19.9%), budesonide/formoterol 160/9 μg (20.7%) and placebo (19.5%) groups versus formoterol (16.9%). COPD severity[1] was moderate in 17.8%, severe in 60.4% and very severe in 21.5% of patients. Common co-morbid conditions included hypertension (41.6%), lipid profile abnormalities (22.0%), cardiac disease (17.7%), diabetes mellitus (11.0%), osteoporosis (10.5%), cataracts (5.2%), atrial fibrillation/arrhythmia (4.3%) and congestive cardiac failure (2.9%).\nTable IPatient demographic and baseline clinical characteristics of randomized patients\n\nBODY.RESULTS.EFFICACY EVALUATIONS.PULMONARY FUNCTION.CO-PRIMARY ASSESSMENTS:\nImprovements in pre-dose FEV1 were significantly greater for budesonide/formoterol 320/9 μg compared with formoterol (primary comparison; p = 0.008) and for both budesonide/formoterol dosages compared with placebo (p < 0.001) [figure 3a]. Although improvements in 1-hour post-dose FEV1 were significantly greater for both budesonide/formoterol dosages compared with placebo (primary comparison; p < 0.001), budesonide/formoterol 320/9 μg also resulted in significantly greater improvements compared with formoterol (p = 0.023) [figure 3b]. Improvements from baseline were apparent at the first assessment (pre-dose FEV1 at end of month 1; 1-hour post-dose FEV1 at day of randomization) and overall maintained over the 12-month treatment period for both budesonide/formoterol dosages (figure 3a and b).\nFig. 3Co-primary efficacy endpoints. Least squares mean change from baseline by study visit over the randomized treatment period in (a) pre-dose forced expiratory volume in 1 second (FEV1) and (b) 1-hour post-dose FEV1. BUD = budesonide; DPI = dry powder inhaler; FM = formoterol; pMDI = pressurized metered-dose inhaler. * p < 0.001 vs placebo; † p ≤ 0.023 vs FM.\n\nBODY.RESULTS.EFFICACY EVALUATIONS.PULMONARY FUNCTION.SECONDARY ASSESSMENTS:\nMorning and evening PEF improved significantly for all active treatments compared with placebo (p ≤ 0.012) and for both budesonide/formoterol dosages compared with formoterol (p ≤ 0.017) [table II]. Results for pre-dose and 1-hour post-dose FVC are presented in the online supplement (see figures S1a and b in the supplementary material ['ArticlePlus'] at http://links.adisonline.com/DGZ/A5).\nTable IIMean (SD) changes from baseline in additional pulmonary function assessments In the serial spirometry subset, a ≥15% improvement in FEV1 was observed at 5 minutes after dose administration (first assessment) with all active treatments on the day of randomization (budesonide/formoterol 320/9 μg, 17.7%; budesonide/formoterol 160/9 μg, 20.3%; formoterol 9 μg, 16.5%) and at end of treatment (budesonide/formoterol 320/9 μg, 21.8%; budesonide/formoterol 160/9 μg, 22.4%; formoterol 9 μg, 15.0%) [figures 4a, b and c]. There was no evidence of a diminished effect at end of treatment over the 12-hour period in the budesonide/formoterol groups. 
Mean FEV1 at 12 hours and baseline-adjusted average 12-hour FEV1 were significantly improved with both budesonide/formoterol dosages compared with placebo on the day of randomization and at end of treatment (p ≤ 0.002) [table II]. Although both budesonide/formoterol dosages demonstrated significant improvements in mean FEV1 at 12 hours and in baseline-adjusted average 12-hour FEV1 compared with formoterol on the day of randomization (p ≤ 0.029), only budesonide/formoterol 320/9 μg demonstrated this effect at the end of treatment (p ≤ 0.004) [table II].\nFig. 4Mean percentage change from baseline in forced expiratory volume in 1 second (FEV1) over 12 hours at randomization and end of treatment (EOT) for (a) budesonide (BUD)/formoterol (FM) 320/9 μg twice daily (bid) vs placebo, (b) BUD/FM 160/9 μg bid vs placebo and (c) FM 9 μg bid vs placebo. DOR = day of randomization; DPI = dry powder inhaler; pMDI = pressurized metered-dose inhaler. No significant differences in pre-dose IC were observed among the treatment groups (figure 5a). Improvements from baseline to the average over the randomized treatment period in 1-hour post-dose IC were significantly greater with both budesonide/formoterol dosages compared with placebo (p < 0.001) and formoterol (p ≤ 0.018) [figure 5b]. Both dosages of budesonide/formoterol resulted in a 350 mL (≈25%) improvement from baseline to the average over the randomized treatment period in 1-hour post-dose IC, and formoterol resulted in a 210 mL (17%) improvement in 1-hour post-dose IC.\nFig. 5Least squares mean change from baseline by study visit over the randomized treatment period in (a) pre-dose inspiratory capacity (IC) and (b) 1-hour post-dose IC. BUD = budesonide; DPI = dry powder inhaler; FM = formoterol; pMDI = pressurized metered-dose inhaler. * p < 0.001 vs placebo; † p = 0.01 vs FM; ‡ p < 0.05 vs FM; § p < 0.01 vs placebo.\n\nBODY.RESULTS.EFFICACY EVALUATIONS.CHRONIC OBSTRUCTIVE PULMONARY DISEASE (COPD) EXACERBATIONS:\nTime to first COPD exacerbation was significantly prolonged with both budesonide/formoterol dosages compared with placebo (p ≤ 0.004) and with budesonide/formoterol 320/9 μg compared with formoterol (p = 0.026) [figure 6]. In addition, significant reductions in the overall number of exacerbations per patient-treatment year were observed with budesonide/formoterol 320/9 μg and 160/9 μg versus placebo (37% and 41%, respectively; p < 0.001) and formoterol (25% and 29%, respectively; p ≤ 0.004) [see figure S2 in the supplementary material]. These reductions were driven by exacerbations treated with oral corticosteroids, the rate of which was reduced with budesonide/formoterol 320/9 μg and 160/9 μg versus placebo (37.5% and 42.9%, respectively) and formoterol (24.1% and 30.6%, respectively) [p ≤ 0.006]. In addition, the percentage of patients in the budesonide/formoterol 320/9 μg, budesonide/formoterol 160/9 μg, formoterol and placebo groups who experienced an exacerbation was greater in those receiving ICS therapy either alone or in combination before the study (33.7%, 38.2%, 39.9% and 40.1%, respectively) compared with those not previously receiving ICS (25.8%, 24.9%, 29.4% and 32.8%, respectively).\nFig. 6Kaplan-Meier probability curve for the time to first chronic obstructive pulmonary disease exacerbation during randomized treatment. BUD = budesonide; DPI = dry powder inhaler; FM = formoterol; pMDI = pressurized metered-dose inhaler. 
* p ≤ 0.004 vs placebo; † p = 0.026 vs FM.\n\nBODY.RESULTS.EFFICACY EVALUATIONS.HEALTH STATUS AND COPD SYMPTOMS:\nImprovements in SGRQ total score were significantly greater for both budesonide/formoterol dosages compared with placebo (p ≤ 0.006) and for budesonide/formoterol 160/9 μg compared with formoterol (p = 0.006; table III). The improvements from baseline in SGRQ total score in the active treatment groups were not clinically meaningful versus placebo (i.e. decrease of ≥4 points). Significantly greater improvements in COPD symptom variables were observed for both budesonide/formoterol dosages versus placebo for all variables (BCSS, dyspnoea score, cough score, sputum score, sleep score, percentage of awakening-free nights and total daily rescue medication use) [p ≤ 0.003]. Compared with formoterol, mean improvements were significantly greater with budesonide/formoterol 320/9 μg (p ≤ 0.038) for all COPD symptom variables except sputum score and percentage of awakening-free nights, and with budesonide/formoterol 160/9 μg (p ≤ 0.047) for all COPD symptom variables except BCSS, dyspnoea and sputum score (table IV).\nTable IIIMean changes (SD) in SGRQ total and domain scores from baselinea to end of treatmentbTable IVMean (SD) changes in chronic obstructive pulmonary disease (COPD) symptom variables from baselinea to the average over the randomized treatment period Improvements (i.e. reductions) in dyspnoea scores were significantly greater for all active treatments versus placebo (p ≤ 0.003) and for budesonide/formoterol 320/9 μg versus formoterol (p = 0.032; table IV). All active treatment arms demonstrated ≥0.2 points (minimal important difference) change from baseline, but only budesonide/formoterol 320/9 μg had an increase of ≥0.2 points over placebo. A significantly (p < 0.001) greater percentage of patients in both budesonide/formoterol groups and in the formoterol group experienced clinically meaningful improvements in dyspnoea versus placebo (see figure S3 in the supplementary material).\n\nBODY.RESULTS.SAFETY EVALUATIONS:\nMean treatment exposure was lowest for placebo (270 days) compared with budesonide/formoterol 320/9 μg (305 days), budesonide/formoterol 160/9 μg (299 days) and formoterol (289 days). The most commonly reported AE (irrespective of causality) was COPD, which had a lower incidence with budesonide/formoterol 320/9 μg versus placebo and a slightly higher incidence with budesonide/formoterol 160/9 μg versus placebo (table V). AEs considered by the investigator to be related to study medication were generally similar among treatment groups with the most commonly reported being oral candidiasis, COPD and dysphonia (see table SI in the supplementary material).\nTable VOverall adverse events (AEs) [irrespective of relationship to study medication] reported by ≥3% of patients The total incidence of pneumonia-related AEs (pneumonia, bronchopneumonia, lobar pneumonia and pneumonia staphylococcal) was similar for budesonide/formoterol 320/9 μg and 160/9 μg (4.0% and 3.4%, respectively) compared with formoterol (3.4%) and placebo (5.0%) [see table SII in the supplementary material]. For potential lung infections other than pneumonia, the incidence was slightly higher in the active treatment groups versus placebo, which was driven largely by bronchitis (see table SII in the supplementary material). 
AEs typically or potentially associated with local and systemic effects of inhaled corticosteroids (local effects: aphonia, dysphonia, oral candidiasis and candidiasis; systemic effects: weight gain, adrenal suppression, ocular effects, skin effects, psychiatric disorder, diabetes control, thirst, taste effects and bone effects) were more frequent with budesonide/formoterol 320/9 μg (10.3% and 4.0%, respectively) and budesonide/formoterol 160/9 μg (5.7% and 4.5%, respectively) than formoterol (0.6% and 2.6%, respectively) and placebo (2.5% and 2.7%, respectively). The overall incidence of LABA class effects (i.e. tremor, palpitation, tachycardia, potassium changes, glucose changes, headache, agitation, anxiety, sleep effects and muscle cramp) was low, but higher among active treatment groups (budesonide/formoterol 320/9 μg [9.5%], budesonide/formoterol 160/9 μg [8.9%] and formoterol [6.5%]) versus placebo (4.8%). The incidence of cardiac-related AEs was higher for all active treatments (10.5–11.3%) versus placebo (6.9%). The most common cardiac-related AEs were hypertension and angina pectoris (budesonide/formoterol 320/9 μg [2.4% and 1.2%, respectively], budesonide/formoterol 160/9 μg [1.6% and 0.8%, respectively], formoterol [2.6% and 1.8%, respectively] and placebo [2.1% and 1.0%, respectively]). Although the number of atrial fibrillation AEs was low, all cases occurred within the active treatment groups. Discontinuations due to AEs (DAEs) occurred in 238 patients, with a similar incidence observed across treatment groups (range 11.3–12.5%) [see table SIII in the supplementary material]. The most common DAE was COPD, which was highest in the formoterol group (7.3%) and lowest in the budesonide/formoterol 320/9 μg group (4.0%); the incidence was similar in the budesonide/formoterol 160/9 μg (6.1%) and placebo (6.0%) groups (see table SIII in the supplementary material). The overall incidence of non-fatal serious adverse events (SAEs) was highest with formoterol (17.8%) compared with the other treatment groups and higher with budesonide/formoterol 320/9 μg (15.6%) and budesonide/formoterol 160/9 μg (13.6%) versus placebo (12.1%) [see table SIV in the supplementary material]. The most common non-fatal SAEs (occurring in five or more patients [all treatment groups combined]) were COPD (6.8%), pneumonia (1.5%), atrial fibrillation (0.5%), angina pectoris (0.3%), bronchitis (0.3%) and coronary artery disease (0.3%). These non-fatal SAEs occurred at a similar incidence in all treatment groups, except for COPD, which was slightly more common in the active treatment groups (budesonide/formoterol 320/9 μg [7.1%], budesonide/formoterol 160/9 μg [6.7%] and formoterol [7.9%]) versus placebo (5.6%). Fifteen patients died during the randomized treatment period with no imbalance across treatments: three in the budesonide/formoterol 320/9 μg group; six in the budesonide/formoterol 160/9 μg group; two in the formoterol group; and four in the placebo group. Fifteen patients died after the randomized treatment period (≥2 days after stopping study treatment) with no imbalance across treatments: five in the budesonide/formoterol 320/9 μg group; two in the budesonide/formoterol 160/9 μg group; four in the formoterol group; and four in the placebo group. None of the deaths were considered drug related by the investigator. Clinically significant changes in vital signs, 24-hour urinary cortisol, and 12-lead ECGs, QT interval and Holter assessments were rare. 
The geometric mean values of 24-hour urinary cortisol at 6 months and end of treatment were lower in both budesonide/formoterol dosage groups compared with formoterol and placebo (see table SV in the supplementary material). Significant differences in 24-hour urinary cortisol were observed between the budesonide/formoterol 320/9 μg and placebo groups (p = 0.035) at month 6 and between the budesonide/formoterol 320/9 μg and formoterol groups (p = 0.044) at end of treatment. Corrected QT (QTc) intervals ≥450 msec were experienced by similar percentages of patients across all treatment groups, whereas few patients experienced QTc intervals ≥500 msec or a QTc change ≥60 msec (see table SVI in the supplementary material). Data from ECG and Holter recordings showed a similar incidence of new onset atrial fibrillation in the active and placebo groups. BMD was stable over the 1-year study period in all treatment groups, with small but statistically significant differences in changes from baseline observed for budesonide/formoterol 320/9 μg compared with all other treatments for total lumbar spine BMD (p ≤ 0.037) and for budesonide/formoterol 320/9 μg versus formoterol for total hip BMD (p = 0.012) [see table SVII in the supplementary material]. The differences between the treatment groups in changes from baseline in BMD were of unclear clinical relevance, as the geometric least squares mean ratios for these treatment group comparisons were close to 1 (range 0.98–0.99). Minor increases in lenticular opacities and intraocular pressure were noted across all treatment groups in the ophthalmological analysis subset; the magnitude of these changes was similar among the treatment groups (see table SVIII in the supplementary material). Small but significant differences were observed for the change from baseline in posterior subcapsular score between the budesonide/formoterol groups (p = 0.022). Clinically significant changes in ophthalmological assessment were infrequent.\n\nBODY.DISCUSSION:\nIn this study, both budesonide/formoterol dosage strengths resulted in significant improvements from baseline over the randomized treatment period in pulmonary function, health status and COPD symptoms, as measured by dyspnoea, cough and sputum production, and a clinically significant reduction in COPD exacerbation rate compared with placebo. In addition, both budesonide/formoterol pMDI dosages were well tolerated relative to formoterol alone and placebo. The results of this 12-month study confirm those from the earlier related 6-month study by Tashkin et al.[4] and are consistent with those from studies of budesonide/formoterol DPI.[2,3] Taken together, the results demonstrate that this ICS/LABA combination provides benefits beyond formoterol alone in the treatment of COPD. The contribution of budesonide to improvements from baseline in pulmonary function and COPD symptoms was demonstrated by the greater efficacy of budesonide/formoterol 320/9 μg compared with formoterol alone for the co-primary variables of pre-dose FEV1 and 1-hour post-dose FEV1, and for dyspnoea and total BCSS scores, supporting use of the higher-dose combination for COPD symptoms. Both budesonide/formoterol dosage strengths demonstrated greater improvements from baseline compared with formoterol alone in morning and evening PEF, 12-hour FEV1, symptom scores and rescue medication use. 
In addition, the 25–29% reduction in exacerbation rate observed with both budesonide/formoterol dosage strengths beyond the substantial reduction achieved with formoterol alone further demonstrates the important contribution of budesonide to the combination product. The magnitude of the reduction in exacerbation rates reported in this study is similar to that reported previously for fixed-dose regimens of budesonide/formoterol 320/9 μg twice daily[2–4] and fluticasone propionate/salmeterol[11–13] relative to LABA alone. A key difference between this 12-month study and the 6-month study of similar design reported by Tashkin et al.[4] is that this study was powered to show a difference in exacerbations, while the 6-month study was not. This study demonstrated a significant reduction in exacerbation rates in patients treated with budesonide/formoterol 320/9 μg and 160/9 μg compared with those receiving formoterol or placebo. The fact that a greater percentage of patients in the formoterol and placebo groups who previously received ICS therapy, either alone or in combination, experienced exacerbations compared with those who did not previously receive ICS therapy may result in a skewing of the data; however, a similar response was observed in combination budesonide/formoterol patients with a greater percentage of patients who previously received ICS experiencing exacerbations compared with patients who were not previously treated with ICS therapy. Similarly, a greater percentage of patients who previously received ICS therapy discontinued in the formoterol and placebo groups than those who did not previously receive ICS therapy; however, this trend was reversed in the combination budesonide/formoterol groups. The long-term nature of this study allowed for a more thorough assessment of tolerability compared with the 6-month study.[4] No differences in pneumonia incidence were observed among treatment groups in this study or in the 6-month study, despite a shorter exposure time in the placebo group.[4] These results contrast with previous COPD studies that showed an increased incidence of pneumonia with fluticasone propionate/salmeterol treatment.[11–14] However, in this study and the 6-month study,[4] the incidence of potential lung infections other than pneumonia was slightly higher in all active treatments compared with placebo; these differences were largely driven by an increased incidence of bronchitis. The incidence of SAEs, including COPD SAEs, was slightly higher in both budesonide/formoterol groups compared with placebo. This finding may be attributed to the earlier discontinuation in the placebo group, which resulted in a shorter study drug exposure (35 and 29 fewer days vs budesonide/formoterol 320/9 μg and 160/9 μg, respectively). Although the overall incidence of atrial fibrillation, a known LABA class effect, was small (1.1%), an imbalance was noted between the formoterol-containing arms and placebo. However, no differences in new-onset atrial fibrillation were apparent among groups based on 24-hour Holter monitoring or ECG data collected at repeated visits during the randomized treatment period. 
Overall, the safety profile of budesonide/formoterol pMDI in this 12-month study was similar to that reported in the 6-month study by Tashkin et al.,[4] suggesting no increase in safety concerns with a longer duration of treatment.\n\nBODY.CONCLUSIONS:\nBoth dosage strengths of budesonide/formoterol pMDI (320/9 μg and 160/9 μg) were effective and well tolerated over 1 year of treatment in this population of patients with moderate to very severe COPD. In addition, budesonide/formoterol pMDI 320/9 μg demonstrated greater efficacy compared with formoterol for pre-dose FEV1, suggesting that the higher dosage strength containing twice the daily dose of budesonide is appropriate in this patient population.\n\nBODY.ELECTRONIC SUPPLEMENTARY MATERIAL:\nSupplementary material, approximately 247 KB.\n\n**Question:** Compared to Placebo what was the result of Budesonide/formoterol pMDI 320/9 μg on 1-hour post-dose FEV1 improvements?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly increased\n"
}
] |
significantly increased
|
480
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Improved outcomes from the administration of progesterone for patients with acute severe traumatic brain injury: a randomized controlled trial\n\n ABSTRACT.BACKGROUND:\nSevere traumatic brain injury (TBI) has been increasing with greater incidence of injuries from traffic or sporting accidents. Although there are a number of animal models of TBI using progesterone for head injury, the effects of progesterone on neurologic outcome of acute TBI patients remain unclear. The aim of the present clinical study was to assess the longer-term efficacy of progesterone on the improvement in neurologic outcome of patients with acute severe TBI.\n\nABSTRACT.METHODS:\nA total of 159 patients who arrived within 8 hours of injury with a Glasgow Coma Score ≤ 8 were enrolled in the study. A prospective, randomized, placebo-controlled trial of progesterone was conducted in the Neurotrauma Center of our teaching hospital. The patients were randomized to receive either progesterone or placebo. The primary endpoint was the Glasgow Outcome Scale score 3 months after brain injury. Secondary efficacy endpoints included the modified Functional Independence Measure score and mortality. In a follow-up protocol at 6 months, the Glasgow Outcome Scale and the modified Functional Independence Measure scores were again determined.\n\nABSTRACT.RESULTS:\nOf the 159 patients randomized, 82 received progesterone and 77 received placebo. The demographic characteristics, the mechanism of injury, and the time of treatment were compared for the two groups. After 3 months and 6 months of treatment, the dichotomized Glasgow Outcome Scale score analysis exhibited more favorable outcomes among the patients who were given progesterone compared with the control individuals (P = 0.034 and P = 0.048, respectively). The modified Functional Independence Measure scores in the progesterone group were higher than those in the placebo group at both 3-month and 6-month follow-up (P < 0.05 and P < 0.01). The mortality rate of the progesterone group was significantly lower than that of the placebo group at 6-month follow-up (P < 0.05). The mean intracranial pressure values 72 hours and 7 days after injury were lower in the progesterone group than in the placebo group, but there was no statistical significance between the two groups (P > 0.05). Instances of complications and adverse events associated with the administration of progesterone were not found.\n\nABSTRACT.CONCLUSION:\nOur data suggest that acute severe TBI patients with administration of progesterone hold improved neurologic outcomes for up to 6 months. 
These results provide information important for further large and multicenter clinical trials on progesterone as a promising neuroprotective drug.\n\nABSTRACT.TRIAL REGISTRATION:\nACTRN12607000545460.\n\nBODY.INTRODUCTION:\nTraumatic brain injury (TBI) remains one of the leading causes of injury-related death and severe disability. The management of TBI currently includes preventing further neurological insults, managing the intracranial pressure (ICP), and surgical procedures. It is very important to search for clinically effective neuroprotective drugs to prevent secondary brain injury after TBI. In spite of many neuroprotective agents showing efficacy in experimental models of TBI, none has produced significant neuronal protection when tested in clinical trials [1,2]. Progesterone, a hormone, has steroidal, neuroactive and neurosteroidal action in the central nervous system. Neuroprotective effects of progesterone have recently been shown in a variety of animal models, including ischemic and traumatic brain insult models [3-6]. Postinjury administration of progesterone in experimental models of head injury confers significant protection against TBI-induced cerebral edema and secondary neuronal death, promoting behavioral recovery [7,8]. Experimental evidence suggests that postinjury treatment with progesterone decreases brain edema, attenuates free radical damage, and reduces neuronal loss in TBI animal models [8-13]. Progesterone also reduces the inflammatory response and attenuates neurological abnormalities after ischemia and spinal cord injury [14-18]. In a recently published controlled study of progesterone, Wright and colleagues conducted a phase II, randomized, double-blind, placebo-controlled trial to assess the safety and benefit of administering progesterone to patients with acute TBI [19]. No serious adverse events were found in the 77 patients who received progesterone, and the patients with moderate TBI who received progesterone were more likely to have a moderate to good outcome than those who were randomized to placebo at 30 days post injury. The 30-day mortality in the progesterone group was less than one-half that of the control group. This outcome suggests that progesterone causes no harm and may be a beneficial treatment for TBI [19]. Despite these potential advantages and the good safety profile of progesterone described in studies utilizing animals or humans as subjects, there is relatively little information available on the neuroprotective properties of progesterone in patients with acute severe brain trauma. The effects of progesterone on the neurological outcome of TBI patients remain unclear. The purpose of the present pilot clinical study was to assess the longer-term efficacy of progesterone in improving the neurological outcome of patients with acute severe TBI.\n\nBODY.MATERIALS AND METHODS.PATIENTS:\nPatients with acute severe TBI and a Glasgow Coma Scale (GCS) score ≤ 8 after resuscitation and stabilization were entered into the study. Two hundred and thirty patients from the Neurotrauma Center of our teaching hospital were included. Male or female patients between the ages of 18 and 65 years were studied. The patients received progesterone within 8 hours after the documented time of injury. All patients admitted to the Neurotrauma Center, Clinical Medical College of Hangzhou between March 2004 and February 2007 were consecutively eligible for enrollment. 
We excluded patients who had received any investigational drugs 30 days prior to the enrollment, such as progesterone, estrogen and investigational compound, patients with severe anoxic intracerebral damage or brain death, and patients whose clinical condition was unstable (partial pressure of oxygen < 60 mmHg or a systolic blood pressure < 90 mmHg, or both). We also excluded pregnant patients and lactating female patients, and those for whom there was doubt whether the neurological status resulted from head trauma or acute or chronic spinal cord injury. The study was conducted in compliance with the clinical protocol approved by the Institutional Review Board and the ethical committees of Clinical Medical College of Hangzhou, according to Good Clinical Practice standards. Because of the nature of patients' injuries, subjects in this clinical study were incapable of granting informed consent. Investigators therefore obtained informed consent from the subject's legal guardian or health proxy before administering the drug. Given the urgent circumstances, we were unable to obtain permission from a legal guardian or health proxy within the stipulated time window for some patients (n = 53). Investigators therefore sought approval from the Institutional Review Board to use deferred consent. If the Institutional Review Board determined that these regulatory criteria were satisfied, the investigators were permitted to enroll subjects without consent. When the drug was administered without proxy consent, the Institutional Review Board was notified within 2 working days. We continued to try to contact the proxy consent after drug administration, and documented those attempts to the Institutional Review Board. Once contacted, the family or legally authorized representative was notified of the patient's enrollment and asked to provide written approval for the patient's continued participation. If attempts to contact proxy consent were unsuccessful, or if the patient died before the family could be contacted, we notified the Institutional Review Board and placed a full report in the patient's record and study file.\n\nBODY.MATERIALS AND METHODS.STANDARD CLINICAL MANAGEMENT:\nAfter head computerized tomography scanning, the patients were delivered to the neurosurgical intensive care unit of the teaching hospital immediately or following surgical evacuation of an intracranial hematoma. All patients received the standard treatment for management of severe TBI based on the guidelines for the management of severe head injury of the American Association of Neurologic Surgeons [20]. Particular emphasis was placed on the prevention and treatment of secondary insults, the avoidance of intracranial hypertension, maintenance of a normovolemic state as well as normothermia and normoglycemia, with ventilation to maintain the oxygen pressure at a minimum of 100 mmHg and the carbon dioxide pressure at approximately 35 mmHg.\n\nBODY.MATERIALS AND METHODS.RANDOMIZATION AND MEDICATION ADMINISTRATION:\nThe prospective, randomized, placebo-controlled, double-blind study was conducted in our neurosurgical intensive care unit. Subjects enrolled in the study were randomized to receive either progesterone (Tianjing Jinyao Pharmaceutical Co. Ltd, Tianjing, China) or matching placebo within 8 hours of the documented time of injury. Qualifying patients were randomly assigned in a 1:1 manner to receive the matching treatment with random numbers. 
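A minimal sketch of the 1:1 assignment "with random numbers" described above, assuming simple (unblocked) randomization; the trial does not report the random-number generator, any seed, or any blocking or stratification for allocation, so the concrete details below are illustrative only.

```python
import random

def allocate(n_patients, seed=20040301):
    """Simple 1:1 randomization: each patient is assigned by an independent random number."""
    rng = random.Random(seed)  # seed chosen arbitrarily; the trial's actual procedure is not specified
    return ["progesterone" if rng.random() < 0.5 else "placebo" for _ in range(n_patients)]

arms = allocate(159)
print(arms.count("progesterone"), arms.count("placebo"))  # approximately, but not exactly, balanced
```

Simple randomization of this kind produces roughly, rather than exactly, equal groups, which is consistent with the 82 versus 77 split reported in the results.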
Patients for the treatment group were given progesterone at 1.0 mg/kg via intramuscular injection and then once per 12 hours for 5 consecutive days. A single-dosage volume equivalent to 0.05 ml/kg was used in each subject. In a double-blind manner, progesterone and placebo were supplied via identical-looking solutions in identical glass vials with or without progesterone. The appearance, packaging and administration of placebo and progesterone injections were the same for the two groups. All patients, treating physicians, nursing staff, and pharmacists were blinded throughout the study period.\n\nBODY.MATERIALS AND METHODS.CLINICAL MEASUREMENTS:\nThe ICP was monitored continuously using ICP monitoring apparatus (CAMINO. MPM-1; Integra Co., San Diego, CA, USA). A computerized tomography scan was obtained in all patients at admission and this was categorized according to the modified Marshall computerized tomography scan classification: I, intracranial pathology not visible on the computerized tomography scan; II, cisterns present with shift ≤ 5 mm; lesions present, but no high-density or mixed-density lesions > 25 cm3, with bone fragments and foreign bodies; III, cisterns compressed or absent, shift ≤ 5 mm, with no high-density or mixed-density lesions > 25 cm3; IV, shift > 5 mm, with no high-density or mixed-density lesions >25 cm3; V, any surgically evacuated lesion; and VI, high-density or mixed-density lesions >25 cm3 without surgical evacuation. The patient's condition – body temperature, heart rate and respiratory rate, blood pressure, and pulse blood oxygen saturation – was monitored continuously at the bedside with monitoring apparatus (Hewlett-Packard, Palo Alto, CA, USA). Daily evaluations of neurologic status over the initial 14-day period were performed via the GCS score, adverse experiences, surgical procedures, and intracranial complications. Intake and output of fluids were also recorded. Laboratory tests including hematology, the coagulation profile and clinical chemistry were performed daily and then for 1 week after injury. A urine pregnancy test was performed at enrollment for female patients (as necessary).\n\nBODY.MATERIALS AND METHODS.NEUROLOGIC OUTCOME MEASUREMENTS:\nThe neurologic outcome was evaluated according to the Glasgow Outcome Scale (GOS) score, which contains five levels of outcome: good recovery, moderate disability, severe disability, vegetative survival, or death. For statistical analysis, GOS scores were dichotomized into favorable or unfavorable outcomes. Patients in the upper two GOS outcome groups (good recovery and moderate disability) were considered of favorable outcome, and patients in the other groups (severe disability, vegetative state, or death) were considered of unfavorable outcome. Secondary efficacy endpoints were the modified Functional Independence Measure (FIM) score and mortality. Based on previous reports [21,22], the modified FIM measurements of disability in three areas of activity (domains of self-care, motor function, and cognitive function) were chosen from the 18 items in the full FIM. Each of three items (expression, feeding, and locomotion) includes four possible levels of function ranging from total dependence (1) to independence (4). The total modified FIM scores therefore ranged from 3 to 12. 
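The modified FIM described above is a small arithmetic construct: three items (expression, feeding, locomotion), each rated from 1 (total dependence) to 4 (independence), are summed to give a total of 3 to 12. A minimal sketch of that scoring:

```python
def modified_fim(expression: int, feeding: int, locomotion: int) -> int:
    """Total modified FIM score from the three items used in this study, each rated 1-4."""
    items = (expression, feeding, locomotion)
    if any(not 1 <= score <= 4 for score in items):
        raise ValueError("each item must be rated between 1 and 4")
    return sum(items)  # possible totals range from 3 to 12

assert modified_fim(1, 1, 1) == 3
assert modified_fim(4, 4, 4) == 12
```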
The patients were assessed using the same measures both at 3 and 6 months in the follow-up protocol.\n\nBODY.MATERIALS AND METHODS.STATISTICAL ANALYSIS:\nDescriptive statistics, including proportions, means and standard deviations, were compiled for all demographic and outcome measures. Demographic and clinical data were analyzed using Fisher's exact test. The statistical analyses were conducted to assess the differences between the treatment group and the control group on specific variables. Statistical analysis was performed using contingency analysis (chi-squared) for categorical data and Student's t test for continuous data. P < 0.05 was considered statistically significant. SPSS 11.0 software package (SPSS Inc., Chicago IL, USA) was used for statistical analysis.\n\nBODY.RESULTS.PATIENTS:\nBetween March 2004 and February 2007, a total of 230 patients were screened in the present study. Of these, 159 patients meeting the protocol stipulation and condition were recruited and randomized to receive either progesterone (n = 82) or placebo (n = 77). Data were available for 154 patients (96%) at the 3-month follow-up and for 135 patients (84%) at the 6-month follow-up. Nineteen patients (11%) were lost to follow-up, three patients (1%) refused follow-up, and two patients (1%) withdrew from the trial. No subjects were enrolled in violation of the protocol stipulations (Figure 1). Figure 1Trial profile. The demographics of the progesterone and placebo groups are presented in Table 1. The cohorts were well balanced with no significant differences between the two groups. The medication history of patients, medication administration, and medical procedures were not significantly different among treatment groups. Table 1 Clinical and demographic characteristics between the two groups Admission characteristic Placebo (n = 77) Progesterone (n = 82) P value Males 57 (74) 58 (70) 0.64 Females 20 (25) 24 (29) 0.64 Mean (standard deviation) age (years) 31 (9) 30 (11) 0.52 Mean (standard deviation) time injury to administration (hours) 3.65 (1.46) 3.80 (2.03) 0.59 Mean (standard deviation) qualifying Glasgow Coma Scale score 6.1 (1.3) 6.0 (1.8) 0.68 Glasgow Coma Scale 3 to 5 20 (25) 22 (26) 0.90 Mechanism of injury Motor vehicle 62 (80) 63 (76) 0.57 Fall 8 (10) 10 (12) 0.71 Assault 4 (5) 7 (8) 0.40 Other 3 (3) 2 (2) 0.59 Surgical procedures 24 (31) 22 (26) 0.54 Pupillary response Bilaterally normal 26 (33) 26 (31) 0.78 Abnormal 51 (66) 56 (68) 0.78 Marshall computerized tomography scan classification I 0 0 II 9 (11) 7 (8) 0.50 III 22 (28) 28 (34) 0.44 IV 13 (16) 12 (14) 0.69 V 24 (31) 22 (26) 0.54 VI 9 (11) 13 (15) 0.44 Data presented as n (%) unless indicated otherwise. \n\nBODY.RESULTS.GLASGOW OUTCOME SCALE SCORES:\nThe 3-month and 6-month GOS scores between the progesterone and placebo groups are summarized in Table 2. There was a better recovery rate for the patients who were given progesterone than for those in the control group at 3-month follow-up (P = 0.044). A dichotomized analysis revealed significant differences in neurologic outcome between the treatment and control groups (Figure 2). The analysis using the dichotomization of GOS scores at 3 months post injury revealed a favorable outcome in 47% of the patients receiving progesterone and in 31% of the placebo group (P = 0.034). There was an unfavorable outcome in 53% of the patients receiving progesterone and in 70% of the placebo group (P = 0.022). 
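A minimal sketch of the dichotomized GOS comparison described in the statistical analysis above: the five GOS levels are collapsed into favorable (good recovery, moderate disability) versus unfavorable (severe disability, vegetative survival, death) and the two arms are compared with a chi-squared test. The 2 x 2 counts below are placeholders, and the authors do not state whether a continuity correction was applied, so this sketch is not expected to reproduce the published P values exactly.

```python
from scipy.stats import chi2_contingency

FAVORABLE = {"good recovery", "moderate disability"}

def dichotomize(gos_category: str) -> str:
    """Collapse a five-level GOS category into favorable/unfavorable, as in the study."""
    return "favorable" if gos_category.lower() in FAVORABLE else "unfavorable"

# Hypothetical 2 x 2 table (rows: progesterone, placebo; columns: favorable, unfavorable).
observed = [[39, 43],
            [24, 53]]
chi2, p, dof, expected = chi2_contingency(observed, correction=False)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")
```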
At 6-month follow-up, the dichotomized GOS scores also showed a significant statistical difference between the two groups, similar to those 3 months after injury. The percentage of favorable outcome was 58% for the patients who were given progesterone and was 42% in the placebo group (P = 0.048). Forty-one percent of patients who were given progesterone and 57% of the placebo group exhibited an unfavorable outcome (P = 0.048). Table 2 Comparison of Glasgow Outcome Scale scores between the progesterone and placebo groups patients at 3-month and 6-month follow-up Glasgow Outcome Scale scores Progesterone (n = 82) Placebo (n = 77) 3 months Good recovery 21 (25) 10 (12) Moderate disability 18 (21) 14 (18) Severe disability 16 (19) 13 (16) Vegetative survival 13 (15) 16 (20) Death 15 (18) 25 (32) 6 months Good recovery 26 (31) 19 (24) Moderate disability 22 (26) 14 (18) Severe disability 9 (10) 11 (14) Vegetative survival 10 (12) 8 (10) Death 15 (18) 25 (32) Data presented as n (%). Figure 2Dichotomized Glasgow Outcome Scale scores for patients receiving either progesterone or placebo. There was a remarkably more favorable outcome among patients who were given progesterone compared with patients receiving placebo (P = 0.034) 3 months postinjury. At 6-month follow-up, the significant difference in the dichotomization of Glasgow Outcome Scale scores between the progesterone and placebo groups was similar to that after three-month injury (P = 0.048). Subgroup analysis for women also showed a significant difference in the percentage of favorable outcome between the two groups at 6-month follow-up (35% in the placebo group and 66% in the progesterone group, P = 0.036). The patients who were given progesterone in the group with GCS of 6 to 8 showed a more favorable outcome (43%) compared with the placebo group (28%) at 6-month follow-up (P = 0.044). There was no significant difference, however, in dichotomized outcomes in the group with GCS of 3 to 5 (P > 0.05).\n\nBODY.RESULTS.MODIFIED FUNCTIONAL INDEPENDENCE MEASURE SCORES:\nFigure 3 shows the modified FIM scores at 3-month and 6-month follow-up. There was a significant difference in the mean modified FIM score between two groups both at 3-month and 6-month follow-up. At the 3-month follow-up, the scores were 7.35 ± 1.89 for the placebo group and 8.02 ± 1.73 for the progesterone group (P < 0.05). Six months after injury, the placebo group showed a score of 8.95 ± 1.05 and the progesterone group presented 9.87 ± 1.17 (P < 0.01), suggesting good functional outcome in the patients treated with progesterone. Figure 3Modified Functional Independence Measure scores for patients receiving either progesterone or placebo. Modified Functional Independence Measure (FIM) scores at 3-month and 6-month follow-up from patients receiving either progesterone or placebo show that the scores in the progesterone group were significantly higher than those in the placebo group at both 3-month and 6-month follow-up. Data expressed as the mean ± standard deviation. Different from the placebo group: *P < 0.05, **P < 0.01.\n\nBODY.RESULTS.MORTALITY:\nDuring the 6 months of follow-up, a total of 40 patients (25%) died in the present study (37 patients died during their hospital stay). Seventy percent of deaths occurred within 1 week after trauma. Mortality was attributed to the heavy head injury in each case. 
The mortality rate in the progesterone treatment group was significantly lower at 6-month follow-up compared with the placebo group (18% versus 32%, P = 0.039).\n\nBODY.RESULTS.INTRACRANIAL PRESSURE:\nFigure 4 shows the ICP in the progesterone group patients and in the placebo group patients at 24 hours, 72 hours and 7 days after injury. The ICP was monitored continuously for 75 patients (47%), 40 in the progesterone group and 35 in the placebo group. The mean ICP showed no apparent difference at 24 hours after trauma between the two groups (progesterone group, 22.1 ± 4.3 mmHg versus placebo group, 23.2 ± 4.6 mmHg; P = 0.121). At 72 hours and 7 days after injury, the mean ICP values of patients who were given progesterone were slightly lower than those of patients who received placebo, but the differences were not statistically significant (16.9 ± 3.8 mmHg and 14.8 ± 3.8 mmHg for progesterone-treated patients versus 18.2 ± 5.1 mmHg and 15.9 ± 4.1 mmHg for placebo-treated patients, respectively; P > 0.05). Figure 4 Comparison of intracranial pressure between patients receiving either progesterone or placebo. The mean intracranial pressure showed no significant differences between the two groups at 24 hours, 72 hours and 7 days after injury (P > 0.05). Data expressed as the mean ± standard deviation.\n\nBODY.RESULTS.GLASGOW COMA SCALE SCORES AND CLINICAL MEASUREMENTS:\nThe mean GCS scores increased progressively in the two groups during the 14-day acute phase of the study, with no apparent differences among the treatment groups. Meanwhile, there was no obvious difference in average body temperature, heart and respiratory rates, blood pressure, pulse blood oxygen saturation, and laboratory testing between the progesterone and placebo groups.\n\nBODY.RESULTS.COMPLICATIONS AND ADVERSE EVENTS:\nProgesterone was well tolerated in the treated patients with acute severe TBI. No complications or adverse events associated with the administration of progesterone were found in this clinical study during the hospitalization period.\n\nBODY.DISCUSSION:\nThe results of the present trial showed for the first time that progesterone administration had longer-term efficacy on clinical outcomes in acute TBI patients. A significant increase in the proportion of patients with a favorable outcome in the progesterone group compared with the placebo group up to 6 months indicates the potential of progesterone for the treatment of acute TBI. Moreover, there were more surviving TBI patients in the treatment group than in the control group. Our results suggest the efficacy of progesterone in the treatment of acute severe TBI. Previous reports showed evidence of efficacy in TBI animal models [8-14]. In the present study, the efficacy and safety of progesterone were confirmed in patients with acute severe TBI. Furthermore, our results using the modified FIM and GOS scores showed that progesterone administration remarkably enhanced functional recovery 6 months after injury and reduced the mortality of the patients with acute severe TBI (GCS = 6 to 8), although there was no statistical significance in the outcome improvement for GCS = 3 to 5 patients with and without progesterone treatment. The evidence of improved outcome for female patients also suggested, in part, a beneficial effect and the feasibility of progesterone in women with TBI, in spite of the limited number of female patients in the trial. 
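For the continuous outcomes above (the modified FIM scores and ICP values, reported as mean ± standard deviation per group), the Student's t test named in the statistical methods can be approximated from the published summary statistics. A hedged sketch using the 3-month modified FIM values; the group sizes used here are the numbers randomized, whereas the numbers actually analysed at follow-up were slightly smaller, so the resulting P value is only an approximation.

```python
from scipy.stats import ttest_ind_from_stats

# 3-month modified FIM: progesterone 8.02 (SD 1.73), placebo 7.35 (SD 1.89)
t_stat, p_value = ttest_ind_from_stats(mean1=8.02, std1=1.73, nobs1=82,
                                        mean2=7.35, std2=1.89, nobs2=77,
                                        equal_var=True)
print(f"t = {t_stat:.2f}, two-sided p = {p_value:.3f}")  # p falls below 0.05, in line with the report
```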
It is recognized that the pathophysiology of TBI is a multifactorial process involving a complex and interwoven series of pathologic processes following the onset of insult, such as increased extracellular glutamate concentrations, increased intracellular Ca2+, free radical overproduction and an exacerbated inflammatory response. Medication targeted at a single pathological injury factor therefore cannot sufficiently reverse functional deficits following TBI. The ideal drugs should be able to block multiple cellular events leading to brain damage following TBI. Neuroprotective strategies currently focus on acting on only one of these mechanisms. Some efforts have been made, however, to combine agents or interventions to increase the probability of success in this setting [23,24]. Nevertheless, the use of a single pharmacologic agent or procedure to slow or block damaging chemicals that are released after brain injury is highly desirable. Progesterone has several features that make it an attractive potential drug candidate for TBI. First, progesterone could protect against brain damage via multiple mechanisms [13,15-18]. The pharmacokinetics of progesterone and its pattern of adverse reactions are well known since the drug has been safely used for a long time [25,26]. Second, progesterone has a wide therapeutic window: a single bolus given up to 24 hours post injury may significantly reduce cerebral edema [7]. Third, progesterone may rapidly cross the blood–brain barrier and reach equilibrium with the plasma within 1 hour of administration [27-29]. Administration of progesterone soon after TBI would probably benefit the recovery of the patient. In the present double-blind trial, progesterone or placebo was dissolved in the same camellia oil and administered daily for 5 days to patients with acute TBI. Those patients administered progesterone experienced significant improvements in functional outcome, indicating neuroprotective properties of progesterone in acute severe TBI. There was no adverse event after administration of progesterone and no further late toxicity up to 6 months in the trial. Goss and colleagues suggested that low and moderate doses of progesterone (4 to 8 mg/kg) were optimal for facilitating recovery of select behaviors, and that postinjury progesterone treatment permitted a wider dose range than preinjury treatment in rats with medial frontal cortical contusions [30]. In addition, 5 days of postinjury progesterone treatment are needed to significantly reduce the neuropathological and behavioral abnormalities found in a rodent model of TBI [13]. Wright and colleagues used intravenous progesterone at a dose of 0.71 mg/kg, followed by 0.5 mg/kg progesterone per 12 hours during the 3 following days, which appeared safe in the treatment of TBI patients [19]. In our study, the patients received a single intramuscular injection of 1.0 mg/kg progesterone followed by the same dose every 12 hours for 5 consecutive days. The results of our trial showed that the higher single doses of progesterone used as protective therapy did not lead to any serious side effects. No obvious symptoms of hormonal reactions were observed in our study. Accordingly, it can be anticipated that progesterone may be a promising treatment for severe TBI patients as it is inexpensive, widely available and has a long track record of safe use in humans to treat other diseases. 
The data in the present study provide very encouraging and favorable support for the assessment of GOS and FIM scores in TBI patients in a clinical trial. The GOS score, although useful, provides only a global assessment of function and dependence; it may not differentiate specific differences in cognitive function, motor function, or daily activities. The modified FIM score selects only three items from the 18-item score, and also distinguishes only four (as opposed to seven) levels of function. Subtle or complex deficiencies, particularly in cognitive function, may not have been identified in the dataset. A deficiency in using any one scale to measure outcome is that it is limited in its scope of measurement. The present clinical study was therefore designed to evaluate functional outcome according to the GOS and the modified FIM score. Intracranial hypertension has been considered an important factor affecting the outcome of patients with acute severe TBI. Progesterone administration has been shown to decrease cerebral edema [9]. In an experimental study with male rats, there was a linear correlation between the serum progesterone level and brain edema after experimental TBI. The higher the serum progesterone level, the lower the cerebral edema [31]. In the current trial, however, no statistically significant difference was found in ICP monitoring between the groups given progesterone or placebo. It seems that progesterone treatment has little effect on directly reducing the ICP of patients with acute severe TBI. As a result of randomization, all of these parameters were homogeneous between the progesterone and placebo groups in our clinical trial. Nevertheless, some limitations are present in the current study. The results could be influenced by the single-center design and the local perioperative standard of care. Therefore, a sufficiently powered trial is necessary to assess progesterone's effects on neurologic outcomes. Our finding of significant differences in outcomes between the two groups of patients emphasizes the potential value of using GOS and FIM to tailor progesterone administration and the likelihood of observing similar differences in a larger patient population; however, the possibility exists that a statistical error may have occurred because of an inadequate sample size. Further studies are needed to determine the mechanisms of action underlying the neurologic effect observed.\n\nBODY.CONCLUSION:\nThe present pilot study indicated that the use of progesterone may significantly improve the neurologic outcome of patients suffering severe TBI up to 6 months after injury, providing a potential benefit to the treatment of acute severe TBI patients. Our results strongly support further large, multicenter clinical trials to examine the ways in which progesterone achieves this profound neurologic effect and to decipher the optimal conditions in which it can be used to lengthen the duration of and improve the degree of neuroprotection.\n\nBODY.KEY MESSAGES:\n• Many neuroprotective agents have been shown to be effective against TBI in animal models, but no single agent has shown improvement in outcome for head injury patients. • A number of experimental models have suggested that administration of progesterone has a potential benefit for head injury. 
• The present clinical trial reveals that progesterone may be used as a potential safe drug for the treatment of acute severe head trauma patients.\n\nBODY.ABBREVIATIONS:\nFIM = Functional Independence Measure; GCS = Glasgow Coma Scale; GOS = Glasgow Outcome Scale; ICP = intracranial pressure; TBI = traumatic brain injury.\n\nBODY.COMPETING INTERESTS:\nThe authors declare that they have no competing interests.\n\nBODY.AUTHORS' CONTRIBUTIONS:\nGMX and WQY participated in the trial design and were involved in the study analysis and summary. GMX and WMW obtained the data. GMX, JW, ZHL and WMW participated in the data analysis and interpretation of the results. All authors reviewed the final version.\n\n**Question:** Compared to placebo what was the result of progesterone on The analysis using the dichotomization of GOS scores at 6 months post injury?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly increased\n"
}
] |
significantly increased
|
324
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Efficacy and Tolerability of Budesonide/Formoterol in One Hydrofluoroalkane Pressurized Metered-Dose Inhaler in Patients with Chronic Obstructive Pulmonary Disease\n\n ABSTRACT:\nBackground: Combination therapy with a long-acting bronchodilator and an inhaled corticosteroid (ICS) is recommended in patients with chronic obstructive pulmonary disease (COPD) who have frequent exacerbations. The efficacy and tolerability of the combination of budesonide/formoterol have been demonstrated in patients with COPD when administered via the dry powder inhaler (DPI) in a 1-year study and when administered via the hydrofluoroalkane (HFA) pressurized metered-dose inhaler (pMDI) in a 6-month study. Objective: This study assessed the long-term efficacy and tolerability of budesonide/formoterol HFA pMDI in patients with moderate to very severe COPD. Methods: This was a 12-month, randomized, double-blind, double-dummy, parallel-group, active- and placebo-controlled, multicentre study (NCT00206167) of 1964 patients aged ≥40 years with moderate to very severe COPD conducted from 2005 to 2007 at 237 sites in the US, Europe and Mexico. After 2 weeks of treatment based on previous therapy (ICSs, short-acting bronchodilators allowed), patients received one of the following treatments twice daily: budesonide/formoterol pMDI 160/4.5 μg × two inhalations (320/9 μg); budesonide/formoterol pMDI 80/4.5 μg × two inhalations (160/9 μg); formoterol DPI 4.5 μg × two inhalations (9 μg); or placebo. Main outcome measures: The co-primary efficacy variables were pre-dose forced expiratory volume in 1 second (FEV1) and 1-hour post-dose FEV1. Results: Budesonide/formoterol 320/9 μg demonstrated greater improvements in pre-dose FEV1 versus formoterol (p = 0.008), and both budesonide/formoterol doses demonstrated greater improvements in 1-hour post-dose FEV1 versus placebo (p < 0.001). The rate of COPD exacerbations was lower in both budesonide/formoterol groups compared with formoterol and placebo (p ≤ 0.004). Both budesonide/formoterol doses were more effective than placebo (p ≤ 0.006) for controlling dyspnoea and improving health status (St George's Respiratory Questionnaire). All treatments were generally well tolerated. The incidence of pneumonia was not different for active (3.4–4.0%) and placebo (5.0%) groups. Conclusions: Budesonide/formoterol pMDI (320/9 μg and 160/9 μg) improved pulmonary function and reduced symptoms and exacerbations over 1 year in patients with moderate to very severe COPD. Only budesonide/formoterol pMDI 320/9 μg demonstrated greater efficacy for both co-primary variables compared with formoterol DPI 9 μg. 
Both budesonide/formoterol pMDI dosages were well tolerated relative to formoterol and placebo.\n\nABSTRACT.ELECTRONIC SUPPLEMENTARY MATERIAL:\nSupplementary material is available for this article at 10.2165/00003495-200969050-00004 and is accessible for authorized users.\n\nBODY.BACKGROUND:\nIn patients with chronic obstructive pulmonary disease (COPD) and frequent exacerbations, the combination of a long-acting bronchodilator and an inhaled corticosteroid (ICS) is recommended.[1] Budesonide and formoterol delivered via a single dry powder inhaler (DPI; Symbicort® Turbuhaler®; AstraZeneca, Lund, Sweden) have been shown to improve pulmonary function[2,3] and reduce COPD exacerbation rates.[2,3] Fixed-combination budesonide/formoterol is available in the US in a hydrofluoroalkane pressurized metered-dose inhaler (pMDI; Symbicort® Inhalation Aerosol; AstraZeneca LP, Wilmington, DE, USA). While most patients with COPD can benefit from either a pMDIor DPI, the pMDI formulation allows patients whomay have inadequate peak inspiratory flow for optimal delivery from a DPI to use the pMDI effectively. A recent 6-month study demonstrated the efficacy and tolerability of budesonide/formoterol pMDI in patients with moderate to very severe COPD.[4] Treatment with budesonide/formoterol pMDI resulted in significant improvements in pulmonary function, dyspnoea and health-related quality of life; however, the duration of the study was limiting with respect to the analysis of low-frequency events, such as COPD exacerbations, and precluded an assessment of safety outcomes with long-term exposure.[4] In this study, the long-term efficacy and tolerability of two dosage strengths of budesonide/formoterol pMDI were assessed over 12 months in patients with moderate to very severe COPD.\n\nBODY.PATIENTS AND METHODS.PATIENTS:\nThe inclusion criteria were designed to select a population with moderate to very severe COPD with previous exacerbations (i.e. appropriate candidates for combination ICS/long-acting β2-adrenoceptor agonist [LABA] therapy): age ≥40 years, diagnosis of symptomatic COPD for >2 years, ≥10 pack-year smoking history, prebronchodilator forced expiratory volume in 1 second (FEV1) of ≤50% of predicted normal and prebronchodilator FEV1/forced vital capacity (FVC) of <70%. Patients were to have a Modified Medical Research Council dyspnoea scale score of ≥2 and a history of at least one COPD exacerbation requiring oral corticosteroids or antibacterials within 1–12 months before the first study visit. Additional enrolment criteria were the same as those in a similar 6-month study by Tashkin et al.[4]\n\nBODY.PATIENTS AND METHODS.STUDY DESIGN:\nThis was a randomized, double-blind, double-dummy, parallel-group, active- and placebo-controlled, multicentre trial (ClinicalTrials.gov identifier: NCT00206167) conducted from 2005 to 2007 at 237 sites in the US, Europe and Mexico. The study protocol was approved by a local institutional review board and ethics committee, and written informed consent was obtained from patients. The study was designed to conform with the Declaration of Helsinki, and was consistent with the International Conference on Harmonisation and Good Clinical Practice and applicable regulatory requirements. After meeting eligibility criteria, patients entered a 2-week run-in period, during which they received ICS monotherapy if previously stable on ICS (alone or in combination) and ipratropium bromide at a fixed dose if previously receiving anticholinergics. 
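A minimal sketch of the headline enrolment criteria listed above in the Patients section, written as a screening check; the parameter names are invented for illustration and the full protocol contains additional criteria that are not captured here.

```python
def meets_main_criteria(age, copd_years, pack_years, fev1_pct_pred,
                        fev1_fvc_ratio, mmrc_score, months_since_exacerbation):
    """Screen against the headline inclusion criteria described in the text (not the full protocol)."""
    return (age >= 40
            and copd_years > 2                      # symptomatic COPD for >2 years
            and pack_years >= 10
            and fev1_pct_pred <= 50                 # prebronchodilator FEV1 <=50% of predicted
            and fev1_fvc_ratio < 0.70               # prebronchodilator FEV1/FVC <70%
            and mmrc_score >= 2
            and 1 <= months_since_exacerbation <= 12)

print(meets_main_criteria(63, 8, 35, 42, 0.55, 3, 6))  # True for this hypothetical patient
```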
Albuterol (salbutamol) was permitted for rescue use throughout the study. At randomization, all previous ICSs were discontinued and patients were randomized to one of four treatments (figure 1). Patients who were treated with ipratropium bromide during the run-in period were allowed to continue on it throughout the study period. Other allowed and disallowed concomitant medications were the same as those reported in the study by Tashkin et al.[4]\nFig. 1Study design. To maintain blinding, patients received both a pressurized metered-dose inhaler (pMDI) and a dry powder inhaler (DPI) containing either active treatment or double-dummy placebo (PL) as appropriate. This study used formoterol (FM) DPI (Oxis® Turbuhaler®, AstraZeneca, Lund, Sweden) as the FM comparator because FM is not available as a hydrofluoroalkane pMDI in the US. A previous study in asthma patients reported equivalent FM-related bronchodilatory effects when FM was administered in combination with budesonide (BUD) via pMDI or alone via DPI.[5] Patients were asked to return to the clinic for follow-up visits 3–8 at the end of months 1, 2, 4, 6, 9 and 12, and received a telephone call 4 weeks after the last clinic visit. bid = twice daily; R = randomization.\n\nBODY.PATIENTS AND METHODS.EFFICACY EVALUATIONS:\nThe co-primary efficacy variables, measured at all clinic visits, were pre-dose FEV1 and 1-hour post-dose FEV1. Additional pulmonary function variables included pre-dose and 1-hour post-dose FVC measured at all clinic visits, and morning and evening peak expiratory flow (PEF) recorded daily in patient diaries. In a subset of patients (n = 491 of 1964; 25%) who agreed to undergo spirometry testing, 12-hour serial spirometry was performed and pre-dose and 1-hour post-dose inspiratory capacity (IC) were collected at randomization and months 6 and 12. FEV1 was measured pre-dose and at 5, 15, 30, 60, 120, 180, 240, 360, 480, 600 and 720 minutes post-dose. Baseline-adjusted average 12-hour FEV1 was calculated as the area between the 12-hour post-dose FEV1-over-time curve and the baseline pre-dose FEV1, divided by observation time. Spirometry was performed according to American Thoracic Society guidelines.[6] Crapo-predicted normals for FEV1 were used.[7] Secondary efficacy endpoints included COPD exacerbations, dyspnoea (separate and composite scores) and health status, all assessed as previously described.[4] A COPD exacerbation was defined as worsening of COPD requiring an oral corticosteroid or hospitalization.[4] Dyspnoea was assessed daily before the evening dose of study medication using the Breathlessness Diary, a validated single-item component of the Breathlessness Cough and Sputum Scale (BCSS).[8] Health status was assessed at months 1, 2, 6 and 12 using the St George's Respiratory Questionnaire (SGRQ).[9,10] Sleep score, percentage of awakening-free nights (sleep score of 0) and study rescue medication use were assessed as previously described by Tashkin et al.[4]\n\nBODY.PATIENTS AND METHODS.SAFETY EVALUATIONS:\nSafety was assessed by adverse event (AE) reporting. Pneumonia events were reported by physicians based on the Medical Dictionary for Regulatory Activities (version 10.0) pneumonia-related preferred terms (pneumonia, bronchopneumonia, lobar pneumonia or pneumonia staphylococcal). 
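The baseline-adjusted average 12-hour FEV1 defined above is an area-under-the-curve quantity: the area between the post-dose FEV1-over-time curve and the pre-dose baseline, divided by the observation time. A minimal sketch using trapezoidal integration; the sample values are invented and the authors' exact numerical method is not reported.

```python
import numpy as np

def baseline_adjusted_average_fev1(times_h, fev1_l, baseline_fev1_l):
    """Area between the post-dose FEV1 curve and the pre-dose baseline, divided by observation time."""
    t = np.asarray(times_h, dtype=float)
    above_baseline = np.asarray(fev1_l, dtype=float) - baseline_fev1_l
    # Trapezoidal area in litre-hours, then divide by the observation time in hours.
    area = np.sum(np.diff(t) * (above_baseline[1:] + above_baseline[:-1]) / 2.0)
    return area / (t[-1] - t[0])

# Invented example: FEV1 (litres) at the scheduled post-dose times (minutes converted to hours).
times = [5/60, 15/60, 30/60, 1, 2, 3, 4, 6, 8, 10, 12]
fev1 = [1.25, 1.30, 1.32, 1.35, 1.33, 1.30, 1.28, 1.24, 1.22, 1.20, 1.18]
print(round(baseline_adjusted_average_fev1(times, fev1, baseline_fev1_l=1.10), 3))
```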
Vital signs and 12-lead ECGs were evaluated as previously described by Tashkin et al.[4] Subsets of patients were assessed for 24-hour urinary cortisol (n = 179), 24-hour Holter monitoring (n = 520), bone mineral density (BMD) at the hip and spine regions (n = 326), and ophthalmological assessments (n = 461), including intraocular pressure and progression of lenticular opacities. In all patients, blood and urine samples were collected, and comprehensive physical examinations were carried out at the time of screening and at the end of month 12. Blood and urine samples were analysed by a central laboratory (Quest Diagnostics Clinical Laboratories, Van Nuys, CA, USA). Vital signs, including heart rate and blood pressure, were measured at all study visits. Samples to assess 24-hour urinary cortisol levels were collected in a subset of patients at or before randomization and within 1 week before the 6- and 12-month visits, and analysed by Quest Diagnostics. 24-Hour Holter monitoring was performed at screening and months 1 and 4, BMD assessments (two sets of dual energy x-ray absorptiometry scans in the hip and spine regions) were performed at screening and the end of month 12, and ophthalmology assessments (intraocular pressure and lenticular opacities) were performed at screening and the end of months 6 and 12.\n\nBODY.PATIENTS AND METHODS.STATISTICAL ANALYSES:\nThe efficacy analysis set (i.e. intent-to-treat population) included all randomized patients who received at least one dose of randomized study medication and contributed sufficient data for at least one co-primary or secondary efficacy endpoint to be calculated during the randomized treatment period. The safety analysis population included all randomized patients who received at least one dose of randomized study medication and from whom any post-randomization data were available. For the subsets of patients who underwent serial spirometry, 24-hour urinary cortisol, BMD, 24-hour Holter monitoring and ophthalmological analyses, analysis sets were defined as those patients who received at least one dose of randomized study medication and had baseline and on-treatment values for the variable being assessed. Patients who discontinued prematurely completed final visit (month 12) assessments at the time of discontinuation, followed by the 4-week follow-up. A sample size of approximately 400 patients per treatment group was calculated to allow 90% power to detect a reduction in the number of COPD exacerbations of approximately 30%, adjusting for overdispersion of 2.3. This sample size ensured >95% power to detect a difference of 0.10L in FEV1, given an estimated standard deviation of 0.3 L. All tests were two-sided, with p ≤ 0.05 considered significant. The prespecified primary comparators were formoterol DPI for pre-dose FEV1 to demonstrate the contribution of budesonide and placebo for 1-hour post-dose FEV1. To address issues of multiplicity relating to multiple-dose comparisons and multiple variables of interest, a fixed-sequence testing procedure was adopted. The primary comparisons were prespecified as budesonide/formoterol pMDI 320/9 μg compared with (i) placebo for pre-dose FEV1 and 1-hour post-dose FEV1; and (ii) formoterol DPI 9 μg for pre-dose FEV1. If significant differences were obtained, comparisons continued with budesonide/formoterol pMDI 320/9 μg compared with placebo for (i) dyspnoea; (ii) SGRQ total score; and (iii) number of exacerbations. 
If significant differences were obtained for each of these key secondary variables, the lower dose of budesonide/formoterol pMDI was tested on the co-primary variables, as previously described, and, if significant differences were obtained, testing continued with number of exacerbations, dyspnoea and SGRQ total score compared with placebo. For all secondary efficacy variables, the primary comparison was budesonide/formoterol pMDI versus placebo. Primary and secondary variables were assessed as the change from baseline to the average over the randomized treatment period except for average 12-hour FEV1, mean FEV1 at 12 hours and SGRQ at end of treatment. Changes from baseline in the co-primary efficacy variables were analysed via analysis of co-variance (ANCOVA), adjusting for treatment, country and baseline value. The following three key secondary efficacy variables were identified: (i) dyspnoea; (ii) SGRQ total score; and (iii) exacerbations. Secondary variables were analysed similar to the co-primary efficacy variables; however, exacerbation rate (number/patient-treatment year) was analysed using a Poisson regression model, adjusted for country and differential randomization time, and time to first COPD exacerbation was described via Kaplan-Meier plot and analysed using the log-rank test. The number and percentage of patients experiencing clinically meaningful changes (based on a prespecified minimal important difference) in dyspnoea were analysed using the Cochran-Mantel-Haenszel test, adjusting for country. Geometric mean 24-hour urinary cortisol levels at end of treatment were compared between treatment groups using a multiplicative ANCOVA model. Mean changes from baseline to the average during the randomized treatment period in 12-lead ECG, and Holter variables and ophthalmology variables were analysed using a model similar to that used for the co-primary variables. BMD variables (natural logarithm of the analysis timepoint minus the natural logarithm of the respective baseline value for hip and spine) were analysed using an ANCOVA model adjusting for country, treatment and natural logarithm of the baseline value. Other safety data were summarized using descriptive statistics. No formal hypothesis testing of the safety data was performed.\n\nBODY.RESULTS.PATIENTS:\nOf 1964 randomized patients, 1355 completed the study (figure 2). Discontinuation was greater with placebo versus budesonide/formoterol and formoterol driven by withdrawal of consent. Time to discontinuation was significantly (p ≤ 0.004) prolonged in both budesonide/formoterol groups versus placebo. AE was the most common reason for study discontinuation, with a similar incidence across treatments (12.1–13.5%). Approximately 60% of patients had documented use of an ICS, either alone or in combination before entering the study. The percentage of patients who discontinued from the study in the formoterol and placebo groups was slightly higher in patients previously treated with ICS (34.9% and 38.0%, respectively) compared with those not previously receiving ICS before the study (26.8% and 33.9%, respectively). In contrast, patients in the budesonide/formoterol 320/9-μg group previously receiving ICS had a lower discontinuation rate compared with those not previously receiving ICS (23.1% vs 34.1%, respectively). In the budesonide/formoterol 160/9-μg group, the percentage of patients who discontinued the study was similar for those who were previously receiving an ICS (29.5%) and those who were not (28.2%).\nFig. 
2Patient disposition. AE = adverse event; bid = twice daily; BUD = budesonide; DPI = dry powder inhaler; FM = formoterol; med = medication; PL = placebo; pMDI = pressurized metered-dose inhaler. Most demographic and disease characteristics at baseline were similar across treatments (table I). However, in the serial spirometry subset (n = 491), mean percentage reversibility was greater in the budesonide/formoterol 320/9 μg (19.9%), budesonide/formoterol 160/9 μg (20.7%) and placebo (19.5%) groups versus formoterol (16.9%). COPD severity[1] was moderate in 17.8%, severe in 60.4% and very severe in 21.5% of patients. Common co-morbid conditions included hypertension (41.6%), lipid profile abnormalities (22.0%), cardiac disease (17.7%), diabetes mellitus (11.0%), osteoporosis (10.5%), cataracts (5.2%), atrial fibrillation/arrhythmia (4.3%) and congestive cardiac failure (2.9%).\nTable IPatient demographic and baseline clinical characteristics of randomized patients\n\nBODY.RESULTS.EFFICACY EVALUATIONS.PULMONARY FUNCTION.CO-PRIMARY ASSESSMENTS:\nImprovements in pre-dose FEV1 were significantly greater for budesonide/formoterol 320/9 μg compared with formoterol (primary comparison; p = 0.008) and for both budesonide/formoterol dosages compared with placebo (p < 0.001) [figure 3a]. Although improvements in 1-hour post-dose FEV1 were significantly greater for both budesonide/formoterol dosages compared with placebo (primary comparison; p < 0.001), budesonide/formoterol 320/9 μg also resulted in significantly greater improvements compared with formoterol (p = 0.023) [figure 3b]. Improvements from baseline were apparent at the first assessment (pre-dose FEV1 at end of month 1; 1-hour post-dose FEV1 at day of randomization) and overall maintained over the 12-month treatment period for both budesonide/formoterol dosages (figure 3a and b).\nFig. 3Co-primary efficacy endpoints. Least squares mean change from baseline by study visit over the randomized treatment period in (a) pre-dose forced expiratory volume in 1 second (FEV1) and (b) 1-hour post-dose FEV1. BUD = budesonide; DPI = dry powder inhaler; FM = formoterol; pMDI = pressurized metered-dose inhaler. * p < 0.001 vs placebo; † p ≤ 0.023 vs FM.\n\nBODY.RESULTS.EFFICACY EVALUATIONS.PULMONARY FUNCTION.SECONDARY ASSESSMENTS:\nMorning and evening PEF improved significantly for all active treatments compared with placebo (p ≤ 0.012) and for both budesonide/formoterol dosages compared with formoterol (p ≤ 0.017) [table II]. Results for pre-dose and 1-hour post-dose FVC are presented in the online supplement (see figures S1a and b in the supplementary material ['ArticlePlus'] at http://links.adisonline.com/DGZ/A5).\nTable IIMean (SD) changes from baseline in additional pulmonary function assessments In the serial spirometry subset, a ≥15% improvement in FEV1 was observed at 5 minutes after dose administration (first assessment) with all active treatments on the day of randomization (budesonide/formoterol 320/9 μg, 17.7%; budesonide/formoterol 160/9 μg, 20.3%; formoterol 9 μg, 16.5%) and at end of treatment (budesonide/formoterol 320/9 μg, 21.8%; budesonide/formoterol 160/9 μg, 22.4%; formoterol 9 μg, 15.0%) [figures 4a, b and c]. There was no evidence of a diminished effect at end of treatment over the 12-hour period in the budesonide/formoterol groups. 
Mean FEV1 at 12 hours and baseline-adjusted average 12-hour FEV1 were significantly improved with both budesonide/formoterol dosages compared with placebo on the day of randomization and at end of treatment (p ≤ 0.002) [table II]. Although both budesonide/formoterol dosages demonstrated significant improvements in mean FEV1 at 12 hours and in baseline-adjusted average 12-hour FEV1 compared with formoterol on the day of randomization (p ≤ 0.029), only budesonide/formoterol 320/9 μg demonstrated this effect at the end of treatment (p ≤ 0.004) [table II].\nFig. 4Mean percentage change from baseline in forced expiratory volume in 1 second (FEV1) over 12 hours at randomization and end of treatment (EOT) for (a) budesonide (BUD)/formoterol (FM) 320/9 μg twice daily (bid) vs placebo, (b) BUD/FM 160/9 μg bid vs placebo and (c) FM 9 μg bid vs placebo. DOR = day of randomization; DPI = dry powder inhaler; pMDI = pressurized metered-dose inhaler. No significant differences in pre-dose IC were observed among the treatment groups (figure 5a). Improvements from baseline to the average over the randomized treatment period in 1-hour post-dose IC were significantly greater with both budesonide/formoterol dosages compared with placebo (p < 0.001) and formoterol (p ≤ 0.018) [figure 5b]. Both dosages of budesonide/formoterol resulted in a 350 mL (≈25%) improvement from baseline to the average over the randomized treatment period in 1-hour post-dose IC, and formoterol resulted in a 210 mL (17%) improvement in 1-hour post-dose IC.\nFig. 5Least squares mean change from baseline by study visit over the randomized treatment period in (a) pre-dose inspiratory capacity (IC) and (b) 1-hour post-dose IC. BUD = budesonide; DPI = dry powder inhaler; FM = formoterol; pMDI = pressurized metered-dose inhaler. * p < 0.001 vs placebo; † p = 0.01 vs FM; ‡ p < 0.05 vs FM; § p < 0.01 vs placebo.\n\nBODY.RESULTS.EFFICACY EVALUATIONS.CHRONIC OBSTRUCTIVE PULMONARY DISEASE (COPD) EXACERBATIONS:\nTime to first COPD exacerbation was significantly prolonged with both budesonide/formoterol dosages compared with placebo (p ≤ 0.004) and with budesonide/formoterol 320/9 μg compared with formoterol (p = 0.026) [figure 6]. In addition, significant reductions in the overall number of exacerbations per patient-treatment year were observed with budesonide/formoterol 320/9 μg and 160/9 μg versus placebo (37% and 41%, respectively; p < 0.001) and formoterol (25% and 29%, respectively; p ≤ 0.004) [see figure S2 in the supplementary material]. These reductions were driven by exacerbations treated with oral corticosteroids, the rate of which was reduced with budesonide/formoterol 320/9 μg and 160/9 μg versus placebo (37.5% and 42.9%, respectively) and formoterol (24.1% and 30.6%, respectively) [p ≤ 0.006]. In addition, the percentage of patients in the budesonide/formoterol 320/9 μg, budesonide/formoterol 160/9 μg, formoterol and placebo groups who experienced an exacerbation was greater in those receiving ICS therapy either alone or in combination before the study (33.7%, 38.2%, 39.9% and 40.1%, respectively) compared with those not previously receiving ICS (25.8%, 24.9%, 29.4% and 32.8%, respectively).\nFig. 6Kaplan-Meier probability curve for the time to first chronic obstructive pulmonary disease exacerbation during randomized treatment. BUD = budesonide; DPI = dry powder inhaler; FM = formoterol; pMDI = pressurized metered-dose inhaler. 
* p ≤ 0.004 vs placebo; † p = 0.026 vs FM.\n\nBODY.RESULTS.EFFICACY EVALUATIONS.HEALTH STATUS AND COPD SYMPTOMS:\nImprovements in SGRQ total score were significantly greater for both budesonide/formoterol dosages compared with placebo (p ≤ 0.006) and for budesonide/formoterol 160/9 μg compared with formoterol (p = 0.006; table III). The improvements from baseline in SGRQ total score in the active treatment groups were not clinically meaningful versus placebo (i.e. decrease of ≥4 points). Significantly greater improvements in COPD symptom variables were observed for both budesonide/formoterol dosages versus placebo for all variables (BCSS, dyspnoea score, cough score, sputum score, sleep score, percentage of awakening-free nights and total daily rescue medication use) [p ≤ 0.003]. Compared with formoterol, mean improvements were significantly greater with budesonide/formoterol 320/9 μg (p ≤ 0.038) for all COPD symptom variables except sputum score and percentage of awakening-free nights, and with budesonide/formoterol 160/9 μg (p ≤ 0.047) for all COPD symptom variables except BCSS, dyspnoea and sputum score (table IV).\nTable IIIMean changes (SD) in SGRQ total and domain scores from baselinea to end of treatmentbTable IVMean (SD) changes in chronic obstructive pulmonary disease (COPD) symptom variables from baselinea to the average over the randomized treatment period Improvements (i.e. reductions) in dyspnoea scores were significantly greater for all active treatments versus placebo (p ≤ 0.003) and for budesonide/formoterol 320/9 μg versus formoterol (p = 0.032; table IV). All active treatment arms demonstrated ≥0.2 points (minimal important difference) change from baseline, but only budesonide/formoterol 320/9 μg had an increase of ≥0.2 points over placebo. A significantly (p < 0.001) greater percentage of patients in both budesonide/formoterol groups and in the formoterol group experienced clinically meaningful improvements in dyspnoea versus placebo (see figure S3 in the supplementary material).\n\nBODY.RESULTS.SAFETY EVALUATIONS:\nMean treatment exposure was lowest for placebo (270 days) compared with budesonide/formoterol 320/9 μg (305 days), budesonide/formoterol 160/9 μg (299 days) and formoterol (289 days). The most commonly reported AE (irrespective of causality) was COPD, which had a lower incidence with budesonide/formoterol 320/9 μg versus placebo and a slightly higher incidence with budesonide/formoterol 160/9 μg versus placebo (table V). AEs considered by the investigator to be related to study medication were generally similar among treatment groups with the most commonly reported being oral candidiasis, COPD and dysphonia (see table SI in the supplementary material).\nTable VOverall adverse events (AEs) [irrespective of relationship to study medication] reported by ≥3% of patients The total incidence of pneumonia-related AEs (pneumonia, bronchopneumonia, lobar pneumonia and pneumonia staphylococcal) was similar for budesonide/formoterol 320/9 μg and 160/9 μg (4.0% and 3.4%, respectively) compared with formoterol (3.4%) and placebo (5.0%) [see table SII in the supplementary material]. For potential lung infections other than pneumonia, the incidence was slightly higher in the active treatment groups versus placebo, which was driven largely by bronchitis (see table SII in the supplementary material). 
AEs typically or potentially associated with local and systemic effects of inhaled corticosteroids (local effects: aphonia, dysphonia, oral candidiasis and candidiasis; systemic effects: weight gain, adrenal suppression, ocular effects, skin effects, psychiatric disorder, diabetes control, thirst, taste effects and bone effects) were more frequent with budesonide/formoterol 320/9 μg (10.3% and 4.0%, respectively) and budesonide/formoterol 160/9 μg (5.7% and 4.5%, respectively) than formoterol (0.6% and 2.6%, respectively) and placebo (2.5% and 2.7%, respectively). The overall incidence of LABA class effects (i.e. tremor, palpitation, tachycardia, potassium changes, glucose changes, headache, agitation, anxiety, sleep effects and muscle cramp) was low, but higher among active treatment groups (budesonide/formoterol 320/9 μg [9.5%], budesonide/formoterol 160/9 μg [8.9%] and formoterol [6.5%]) versus placebo (4.8%). The incidence of cardiac-related AEs was higher for all active treatments (10.5–11.3%) versus placebo (6.9%). The most common cardiac-related AEs were hypertension and angina pectoris (budesonide/formoterol 320/9 μg [2.4% and 1.2%, respectively], budesonide/formoterol 160/9 μg [1.6% and 0.8%, respectively], formoterol [2.6% and 1.8%, respectively] and placebo [2.1% and 1.0%, respectively]). Although the number of atrial fibrillation AEs was low, all cases occurred within the active treatment groups. Discontinuations due to AEs (DAEs) occurred in 238 patients, with a similar incidence observed across treatment groups (range 11.3–12.5%) [see table SIII in the supplementary material]. The most common DAE was COPD, which was highest in the formoterol group (7.3%) and lowest in the budesonide/formoterol 320/9 μg group (4.0%); the incidence was similar in the budesonide/formoterol 160/9 μg (6.1%) and placebo (6.0%) groups (see table SIII in the supplementary material). The overall incidence of non-fatal serious adverse events (SAEs) was highest with formoterol (17.8%) compared with the other treatment groups and higher with budesonide/formoterol 320/9 μg (15.6%) and budesonide/formoterol 160/9 μg (13.6%) versus placebo (12.1%) [see table SIV in the supplementary material]. The most common non-fatal SAEs (occurring in five or more patients [all treatment groups combined]) were COPD (6.8%), pneumonia (1.5%), atrial fibrillation (0.5%), angina pectoris (0.3%), bronchitis (0.3%) and coronary artery disease (0.3%). These non-fatal SAEs occurred at a similar incidence in all treatment groups, except for COPD, which was slightly more common in the active treatment groups (budesonide/formoterol 320/9 μg [7.1%], budesonide/formoterol 160/9 μg [6.7%] and formoterol [7.9%]) versus placebo (5.6%). Fifteen patients died during the randomized treatment period with no imbalance across treatments: three in the budesonide/formoterol 320/9 μg group; six in the budesonide/formoterol 160/9 μg group; two in the formoterol group; and four in the placebo group. Fifteen patients died after the randomized treatment period (≥2 days after stopping study treatment) with no imbalance across treatments: five in the budesonide/formoterol 320/9 μg group; two in the budesonide/formoterol 160/9 μg group; four in the formoterol group; and four in the placebo group. None of the deaths were considered drug related by the investigator. Clinically significant changes in vital signs, 24-hour urinary cortisol, and 12-lead ECGs, QT interval and Holter assessments were rare. 
The geometric mean values of 24-hour urinary cortisol at 6 months and end of treatment were lower in both budesonide/formoterol dosage groups compared with formoterol and placebo (see table SV in the supplementary material). Significant differences in 24-hour urinary cortisol were observed between the budesonide/formoterol 320/9 μg and placebo groups (p = 0.035) at month 6 and between the budesonide/formoterol 320/9 μg and formoterol groups (p = 0.044) at end of treatment. Corrected QT (QTc) intervals ≥450 msec were experienced by similar percentages of patients across all treatment groups, whereas few patients experienced QTc intervals ≥500 msec or a QTc change ≥60 msec (see table SVI in the supplementary material). Data from ECG and Holter recordings showed a similar incidence of new onset atrial fibrillation in the active and placebo groups. BMD was stable over the 1-year study period in all treatment groups, with small but statistically significant differences in changes from baseline observed for budesonide/formoterol 320/9 μg compared with all other treatments for total lumbar spine BMD (p ≤ 0.037) and for budesonide/formoterol 320/9 μg versus formoterol for total hip BMD (p = 0.012) [see table SVII in the supplementary material]. The differences between the treatment groups in changes from baseline in BMD were of unclear clinical relevance, as the geometric least squares mean ratios for these treatment group comparisons were close to 1 (range 0.98–0.99). Minor increases in lenticular opacities and intraocular pressure were noted across all treatment groups in the ophthalmological analysis subset; the magnitude of these changes was similar among the treatment groups (see table SVIII in the supplementary material). Small but significant differences were observed for the change from baseline in posterior subcapsular score between the budesonide/formoterol groups (p = 0.022). Clinically significant changes in ophthalmological assessment were infrequent.\n\nBODY.DISCUSSION:\nIn this study, both budesonide/formoterol dosage strengths resulted in significant improvements from baseline over the randomized treatment period in pulmonary function, health status and COPD symptoms, as measured by dyspnoea, cough and sputum production, and a clinically significant reduction in COPD exacerbation rate compared with placebo. In addition, both budesonide/formoterol pMDI dosages were well tolerated relative to formoterol alone and placebo. The results of this 12-month study confirm those from the earlier related 6-month study by Tashkin et al.[4] and are consistent with those from studies of budesonide/formoterol DPI.[2,3] Taken together, the results demonstrate that this ICS/LABA combination provides benefits beyond formoterol alone in the treatment of COPD. The contribution of budesonide to improvements from baseline in pulmonary function and COPD symptoms was demonstrated by the greater efficacy of budesonide/formoterol 320/9 μg compared with formoterol alone for the co-primary variables of pre-dose FEV1 and 1-hour post-dose FEV1, and for dyspnoea and total BCSS scores, supporting use of the higher-dose combination for COPD symptoms. Both budesonide/formoterol dosage strengths demonstrated greater improvements from baseline compared with formoterol alone in morning and evening PEF, 12-hour FEV1, symptom scores and rescue medication use. 
In addition, the 25–29% reduction in exacerbation rate observed with both budesonide/formoterol dosage strengths beyond the substantial reduction achieved with formoterol alone further demonstrates the important contribution of budesonide to the combination product. The magnitude of the reduction in exacerbation rates reported in this study is similar to that reported previously for fixed-dose regimens of budesonide/formoterol 320/9 μg twice daily[2–4] and fluticasone propionate/salmeterol[11–13] relative to LABA alone. A key difference between this 12-month study and the 6-month study of similar design reported by Tashkin et al.[4] is that this study was powered to show a difference in exacerbations, while the 6-month study was not. This study demonstrated a significant reduction in exacerbation rates in patients treated with budesonide/formoterol 320/9 μg and 160/9 μg compared with those receiving formoterol or placebo. The fact that a greater percentage of patients in the formoterol and placebo groups who previously received ICS therapy, either alone or in combination, experienced exacerbations compared with those who did not previously receive ICS therapy may result in a skewing of the data; however, a similar response was observed in combination budesonide/formoterol patients with a greater percentage of patients who previously received ICS experiencing exacerbations compared with patients who were not previously treated with ICS therapy. Similarly, a greater percentage of patients who previously received ICS therapy discontinued in the formoterol and placebo groups than those who did not previously receive ICS therapy; however, this trend was reversed in the combination budesonide/formoterol groups. The long-term nature of this study allowed for a more thorough assessment of tolerability compared with the 6-month study.[4] No differences in pneumonia incidence were observed among treatment groups in this study or in the 6-month study, despite a shorter exposure time in the placebo group.[4] These results contrast with previous COPD studies that showed an increased incidence of pneumonia with fluticasone propionate/salmeterol treatment.[11–14] However, in this study and the 6-month study,[4] the incidence of potential lung infections other than pneumonia was slightly higher in all active treatments compared with placebo; these differences were largely driven by an increased incidence of bronchitis. The incidence of SAEs, including COPD SAEs, was slightly higher in both budesonide/formoterol groups compared with placebo. This finding may be attributed to the earlier discontinuation in the placebo group, which resulted in a shorter study drug exposure (35 and 29 fewer days vs budesonide/formoterol 320/9 μg and 160/9 μg, respectively). Although the overall incidence of atrial fibrillation, a known LABA class effect, was small (1.1%), an imbalance was noted between the formoterol-containing arms and placebo. However, no differences in new-onset atrial fibrillation were apparent among groups based on 24-hour Holter monitoring or ECG data collected at repeated visits during the randomized treatment period. 
Overall, the safety profile of budesonide/formoterol pMDI in this 12-month study was similar to that reported in the 6-month study by Tashkin et al.,[4] suggesting no increase in safety concerns with a longer duration of treatment.\n\nBODY.CONCLUSIONS:\nBoth dosage strengths of budesonide/formoterol pMDI (320/9 μg and 160/9 μg) were effective and well tolerated over 1 year of treatment in this population of patients with moderate to very severe COPD. In addition, budesonide/formoterol pMDI 320/9 μg demonstrated greater efficacy compared with formoterol for pre-dose FEV1, suggesting that the higher dosage strength containing twice the daily dose of budesonide is appropriate in this patient population.\n\nBODY.ELECTRONIC SUPPLEMENTARY MATERIAL:\nSupplementary material, approximately 247 KB.\n\n**Question:** Compared to Placebo what was the result of Budesonide/formoterol pMDI 320/9 μg on Mean FEV1 at 12 hours at end of treatment?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly increased\n"
}
] |
significantly increased
|
406
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Incisional hernia after upper abdominal surgery: a randomised controlled trial of midline versus transverse incision\n\n ABSTRACT.OBJECTIVES:\nTo determine whether a transverse incision is an alternative to a midline incision in terms of incisional hernia incidence, surgical site infection, postoperative pain, hospital stay and cosmetics in cholecystectomy.\n\nABSTRACT.SUMMARY BACKGROUND DATA:\nIncisional hernias after midline incision are commonly underestimated but probably complicate between 2 and 20% of all abdominal wall closures. The midline incision is the preferred incision for surgery of the upper abdomen despite evidence that alternatives, such as the lateral paramedian and transverse incision, exist and might reduce the rate of incisional hernia. A RCT was preformed in the pre-laparoscopic cholecystectomy era the data of which were never published.\n\nABSTRACT.METHODS:\nOne hundred and fifty female patients were randomly allocated to cholecystectomy through midline or transverse incision. Early complications, the duration to discharge and the in-hospital use of analgesics was noted. Patients returned to the surgical outpatient clinic for evaluation of the cosmetic results of the scar and to evaluate possible complications such as fistula, wound dehiscence and incisional hernia after a minimum of 12 months follow-up.\n\nABSTRACT.RESULTS:\nTwo percent (1/60) of patients that had undergone the procedure through a transverse incision presented with an incisional hernia as opposed to 14% (9/63) of patients from the midline incision group (P = 0.017). Transverse incisions were found to be significantly shorter than midline incisions and associated with more pleasing appearance. More patients having undergone a midline incision, reported pain on day one, two and three postoperatively than patients from the transverse group. The use of analgesics did not differ between the two groups.\n\nABSTRACT.CONCLUSIONS:\nIn light of our results a transverse incision should, if possible, be considered as the preferred incision in acute and elective surgery of the upper abdomen when laparoscopic surgery is not an option.\n\nBODY.INTRODUCTION:\nThe rate of incisional hernia after midline incision is commonly underestimated but probably lies between 2 and 20% [1–5]. Thus, incisional hernia is a major postoperative problem. The treatment of incisional hernia is complicated by high rates of recurrences. Recently, in a randomised controlled trial published by Burger et al. [6], midline incisional hernia repair has been shown to be associated with a 10-year cumulative recurrence rate of 63 and 32% for suture and mesh repair, respectively. The midline incision is the preferred incision for surgery of the upper abdomen, despite evidence that alternatives, such as the lateral paramedian and transverse incision, exist and might reduce the rate of incisional hernia [7]. 
Various approaches to opening the abdomen have been advocated over time. The choice for a certain incision is dependent on the exposure necessary for the desired procedure to succeed. A midline incision, be it supraumbilical, infraumbilical or both, is an approach especially suited for emergency and exploratory surgery because of the quick and generous exposure that can be achieved within a few minutes [8, 9]. The avascular nature of the linea alba minimises blood loss during this procedure. A supraumbilical transverse incision may be utilised in case exposure of the upper abdomen is desired. During this incision, the damage inflicted to the segmental arteries and nerves is previously described as being minimal [10]. Previously, only one randomised controlled trial, comparing transverse and true midline incisions, has been published specifically addressing incisional hernia incidence [11]. To determine whether the use of a transverse incision is an alternative to a midline incision for open cholecystectomy in terms of incisional hernia incidence, surgical site infection, postoperative pain and hospital stay, this randomised controlled trial was performed. This trial was conducted in an era when laparoscopic cholecystectomy was not yet available. The possibility of low incisional hernia rates after transverse incisions and the fact that little is known about potential advantages incited us to publish the relevant results of this randomised controlled trial which has been performed in the past and has only been reported in a Dutch thesis by one of the authors (H.L.). The primary endpoint of this study was the incisional hernia incidence after 12 months of follow-up. Secondary endpoints included pain and cosmetic appearance.\n\nBODY.METHODS.PROTOCOL:\nSome 150 consecutive female patients were randomly assigned to a midline or transverse incision as an approach for elective cholecystectomy or combined cholecystectomy and cholangiography (with or without consecutive choledochotomy) (75 and 75 patients, respectively). Emergency procedures were excluded from participation. The sample size is based on an incisional hernia rate reduction from 20 to 6% at a power of 80% and an error rate of 5%. Obtaining informed consent was conducted in accordance with the ethical standards of the Helsinki Declaration of 1975. The investigation reported was performed with informed consent from all of the patients and followed the guidelines for experimental investigation with human subjects and was approved by the medical ethics committee. An independent statistician prepared closed, tamper-proof envelopes containing the random allocation (Fig. 1). Patients were randomised for one of the procedures in theatre through the opening of the envelopes.Fig. 1Flow chart of patient inclusion and follow-up Patient-related factors that were recorded were age, body mass and length and date of operation. Operation-related factors that were recorded were the exact nature of the operation, length of the incision, the thickness of the subcutaneous fat, surgeon performing the procedure, as well as the duration of the operation (skin-to-skin time). In the immediate postoperative period, the use, dose and type of analgesics was recorded and a pain score was administered. The use of analgesics (morphine 7.5 mg intra-muscular injection, 4 h minimum interval between consecutive injections) was monitored for 48 h after surgery; the pain score was administered for the first 6 days after surgery. 
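The sample-size statement in the Methods above (detecting a fall in incisional hernia rate from 20% to 6% with 80% power at a 5% error rate) corresponds to a standard two-proportion power calculation. The sketch below shows how such a calculation is typically set up with statsmodels; it is illustrative only — the trial does not report which formula was used, a two-sided test is assumed here, and the 75 patients actually randomized per arm presumably also allowed for withdrawals and loss to follow-up, so the printed number per group will not match 75 exactly.

```python
# Illustrative only: approximate set-up of the sample-size statement
# (20% -> 6% incisional hernia rate, 80% power, alpha = 0.05, two-sided assumed).
# The trial report does not state which formula was used, so the result per arm
# will differ somewhat from the 75 patients actually randomized per group.
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

p_midline, p_transverse = 0.20, 0.06                 # assumed rates from the protocol
effect_size = proportion_effectsize(p_midline, p_transverse)  # Cohen's h

n_per_group = NormalIndPower().solve_power(
    effect_size=effect_size, alpha=0.05, power=0.80, alternative="two-sided"
)
print(f"Cohen's h = {effect_size:.2f}, approx. n per group = {n_per_group:.0f}")
```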
In patients assigned to surgery through a midline incision, the skin was incised from just below the xyphoid process to just above the umbilicus. The abdominal wall was opened in the midline by incising the linea alba. A Collin type (two-bladed) self-retaining retractor was used to maintain exposure. The abdominal wall was closed in one layer using single polygalactin 910 sutures (Vicryl; Ethicon, Amersfoort, The Netherlands). The skin was consequently closed using running monofilament nylon sutures (Ethilon; Ethicon, Amersfoort, The Netherlands). Patients randomised for a transverse incision received a right-sided unilateral transverse incision between 3 and 4 cm below the costal margin. The rectus muscle was incised. The fibres of the external and internal obliques and the transverse muscles were separated in the direction of their course. Exposure was achieved through the use of a manually held single-bladed retractor. Closure of the abdominal wall was achieved by closure of the peritoneum and the posterior rectus fascia using a continuous, polygalactin 910 suture (Vicryl; Ethicon, Amersfoort, The Netherlands). The anterior rectus sheath and the fascia of the internal and external transverses were closed using simple interrupted polygalactin 910 sutures (Vicryl; Ethicon, Amersfoort, The Netherlands). Towards the end of both procedures, a Redon low-vacuum drain catheter was placed, which was guided outside the abdominal cavity approximately 5 cm from the incision. The skin was consequently closed using continuous monofilament nylon suture (Ethilon; Ethicon, Amersfoort, The Netherlands). All patients received a dose of 5,000 IU of sodium–heparin on the morning of the procedure as thrombosis prophylaxis.\n\nBODY.METHODS.STATISTICAL ANALYSIS:\nThe Pearson χ2 test was used for comparing percentages. In case of small expected numbers, a Fisher's exact test was performed. Continuous variables were analysed using the Mann–Whitney test. A P-value of 0.05 or less (two-sided) was considered to be statistically significant. Means and medians are expressed ±standard deviation (SD).\n\nBODY.METHODS.FOLLOW-UP:\nPatients returned to the surgical outpatient clinic for evaluation of the cosmetic results of the scar and to evaluate possible complications, such as fistula, wound dehiscence and incisional hernia, after a minimum of 12 months follow-up. The patient and the surgeon evaluated the cosmetic results independently and were asked to rate the scar as unsatisfactory, satisfactory or fine. Furthermore, the length and width of the scar was measured.\n\nBODY.RESULTS.STUDY GROUP:\nSome 150 consecutive patients were randomised for participation in this study during an inclusion period from April 1977 until July 1979. Seventy-five patients received a transverse incision and 75 patients a midline incision (Fig. 1). One patient was withdrawn from further follow-up after developing peritonitis and consequent acute respiratory distress syndrome (ARDS) not related to the closure of the abdominal wall 2 days after surgery (transverse incision group). The patients' average age was 51.9 and 51.4 years for the midline and the transverse incision groups, respectively. Furthermore, no differences were found in the body mass and average length between the two groups (Table 1). A cholecystectomy was performed using a transverse incision in 52 patients and utilising a midline incision in 52 patients also. Fifteen and 16 patients, respectively, underwent a combined cholangiography/cholecystectomy. 
A further 7 and 6 patients, respectively, were treated with a cholangiography/cholecystectomy plus additional choledochotomy and the postexploratory placement of a T-tube.Table 1Baseline characteristics of the patients undergoing surgery, according to study groupVariableMidline incisionTransverse incisionn = 75n = 74Average age (years) ± SD51.9 ± 14.851.4 ± 13.8Average weight (kg) ± SD71.3 ± 14.568 ± 14.3Average length (cm) ± SD163.5 ± 7.8164 ± 7.3\n\nBODY.RESULTS.SURGEON:\nStaff surgeons performed 17% (13/75 patients) of all procedures performed through a midline incision. The remainder of the procedures through a midline incision was carried out under staff surgeon supervision. Staff surgeons performed 14% of all procedures in the transverse incisions study group (10/74 patients) and supervised the remainder. No statistically significant difference was found between the two randomised groups (P = 0.65).\n\nBODY.RESULTS.DURATION OF SURGERY:\nNo significant difference was noted in the skin-to-skin time (in min) for the two different incisions (Table 2). Surgery utilising midline and transverse incision took 56.9 ± 29.3 and 53.2 ± 26.8 min, respectively (P = 0.35). The total duration of the procedures until extubation (in min) did not differ between the midline and transverse incisions (71.0 ± 30.5 and 67.0 ± 27.3, respectively, P = 0.34).Table 2Length of incision, thickness of subcutaneous fat and skin-to-skin time, according to study groupVariableMidline incisionTransverse incisionP-valueLength of incision (mm) ± SDa164 ± 28140 ± 24<0.0001Thickness of subcutaneous fat (mm) ± SDa34.5 ± 13.030.3 ± 12.40.05Skin-to-skin time (min) ± SDa56.9 ± 29.353.2 ± 26.80.40Width of scar (mm) ± SDb8.3 ± 1.43.3 ± 1.2<0.0001aMeasured during surgery in 75 midline and 74 transverse incisionsbMeasured at follow-up in 63 and 60 midline and transverse incisions, respectively\n\nBODY.RESULTS.PAIN AND ANALGESICS:\nSignificantly more patients, having undergone a midline incision, reported pain on day one, two and three postoperatively (P < 0.0001, Table 3). In the midline incision group, 28/75 patients required no or only one dose of analgesics; the remainder required two or more doses. Thirty-one patients operated through a transverse incision required no analgesics or only one dose; 43 patients (the remainder) required two or more. No significant difference in the use of analgesics was found between the groups (P = 0.69).Table 3Postoperatively reported pain, according to study group, shown as the number of patients reporting pain at the time points indicated (percentage), with the remainder of patients reporting no painTime point after surgeryMidline incision n = 75Transverse incision n = 74P-valuePatients reporting pain, n (%)Patients reporting pain, n (%)3–4 h68 (91)60 (81)0.09First day64 (85)39 (53)<0.0001Second day57 (76)23 (31)<0.0001Third day28 (37)9 (12)<0.0001Fourth day5 (7)3 (4)0.72Fifth day0 (0)1 (1)0.50Sixth day0 (0)1 (1)0.50\n\nBODY.RESULTS.COMPLICATIONS:\nPostoperative complications (Table 4) were seen in 16 out of 75 patients (21%) from the midline incision group and in 15% from the transverse incision group (11 patients) (P = 0.30). Briefly, one patient in each group developed cardiac complications; 8 and 6 patients developed urinary retention after the midline and transverse incisions, respectively (P = 0.59). 
Surgical site infections were diagnosed in 7 and 3 patients, respectively (P = 0.33).Table 4Rate of complications after surgery, according to study group, shown as the number of patients diagnosed with complications (percentage)ComplicationMidline incisionTransverse incisionP-valuen = 75 n (%)n = 75 n (%)Cardiac1 (1)1 (1)1Urinary retention8 (12)6 (8)0.59ARDS01 (1)0.50Surgical site infection7 (9)3 (4)0.33Haemorrhage1 (1)00.50Pneumonia01 (1)0.50Total17 (23)12 (16)0.30\n\nBODY.RESULTS.DISCHARGE:\nForty-five (60%) and 42 (57%) patients from the patients having undergone a midline or a transverse incision, respectively, were discharged on day 6 or 7 postoperatively. The remaining patients from each group left hospital care on day 8 or later. The duration of hospital admission did not differ between the two types of incision (P = 0.74).\n\nBODY.RESULTS.COSMETICS:\nThe width and length of all incisions was measured during the follow-up visit (Table 2). The mean width of the scar after the healing of the midline incisions was found to be 8.3 ± 1.4 mm. The mean width of the scar after the healing of the transverse incisions was measured to be 3.3 ± 1.2 mm. This observed difference is significant (P < 0.0001). The length of the incisions was 140 ± 24 mm and 164 ± 28 mm for the transverse and the midline incisions, respectively. The difference in scar length was found to be significant (P < 0.0001).\n\nBODY.FOLLOW-UP:\nEighty-one percent of all patients operated through a transverse incision were seen during the follow-up examination (n = 60). Of the patients operated through a midline incision, 63 out of 75 were seen at the outpatient clinic (84%). The patients that were lost to follow-up could either not be traced or had deceased (Fig. 1). The minimum follow-up for the evaluation of cosmetic results and hernia incidence was 12 months and the maximum was 36 months.\n\nBODY.FOLLOW-UP.INCISIONAL HERNIA:\nFrom the patients that had undergone the procedure through a transverse incision, one (1/60; 2%) presented with an incisional hernia as opposed to 9 patients from the midline incision group (9/63; 14%); 95% confidence interval (CI) 7.5–25.4%. This difference in hernia incidence is significant (P = 0.017). No significant correlation was found between the incisional hernia rate and surgical site infection (P = 0.07).\n\nBODY.FOLLOW-UP.SUBJECTIVE COSMETICS:\nPatients and surgeons alike were asked to rate the appearance of the scar during the postoperative follow-up outpatient clinic visit. Both the surgeons and the patients found the scar resulting from the transverse incision to be more cosmetically pleasing (P < 0.0001 and P = 0.03, respectively, Table 5).Table 5Number of patients and surgeons rating the cosmetics of a scar at follow-upScoreMidline incision (n = 63)Transverse incision (n = 60)Patients, n (%)Surgeons, n (%)Patients, n (%)Surgeons, n (%)Unsatisfactory6 (10)25 (40)2 (3)6 (10)Satisfactory16 (25)27 (43)9 (15)12 (20)Fine41 (65)11 (17)49 (82)42 (70)Total63636060Difference between type of incision: patients P = 0.03; surgeons P < 0.0001\n\nBODY.DISCUSSION:\nThis prospective randomised study of transverse and midline incisions for open cholecystectomy shows that a significant reduction of incisional hernia incidence can be achieved through the use of a transverse incision. Only one other study (published in 1980) reported the incidence of incisional hernia after upper abdominal midline and unilateral transverse incision in a randomised trial. 
No difference between the two techniques (8 and 6% incisional hernia, respectively) was found, but the relatively short follow-up of 6 months, however, may be held accountable for this finding [11]. Three retrospective studies showed rates of incisional hernia of 3.2, 5.4 and 16.5% for midline incision and 1.3, 6.7 and 13.4% for transverse incision without statistically significant differences [12–14]. The possible reason for the rather high incidence of incisional hernia in the midline incision group (14%) may lie in the use of resorbable 910 polygalactin sutures. Nevertheless, the use of the same type of resorbable suture in the closure of the transverse incisions resulted in a 2% hernia rate. There is evidence for the importance of proper technique and choice of incision as a means to reduce incisional hernia being more important than the use of suture material [7]. Furthermore, as mentioned above, it is known that the incidence of incisional hernia in the case of a midline incision lies between 2 and 20%. From our data, the NNT (numbers needed to treat) is calculated to be 8 (95% CI 5–30) and the RRR (relative risk reduction) is 88% (95% CI 23–100%). Luijendijk et al. [15] have published a hernia rate of 2% after Pfannenstiel incisions closed using 910 polygalactin, which is in agreement with our findings in the patients randomised for a transverse incision, emphasising the importance of the incision over the choice of suture material. In our study, significantly fewer patients reported pain on day 1, 2 and 3 after transverse incisions, a result that was also described by other authors [16, 17]. Greenall et al. [18] published a contradictory report (in 1980) in which no significant difference in postoperative pain was found between midline and transverse incisions. The previously mentioned study, however, only analysed 46 out of 572 patients (8%) with regard to pain, which may explain the finding. In the same way, Lacy et al. suspended visual analogue pain scoring in a study comparing midline and transverse incisions for abdominal aortic surgery. Remarkably, the two groups in our study did not differ in terms of postoperative analgesia, a finding that is also reported by Lindgren et al. [17] and Donati et al. [19]. In our study, surgeons as well as patients were significantly more satisfied with the aesthetic appearance after a transverse in comparison with a midline incision. The scars after transverse incisions were found to be significantly shorter and less wide than the midline incisions, which may account for the observed difference. A possible reason for this is that a transverse incision is executed parallel to the prevailing direction of the skin lines on the abdomen and, therefore, the tension on the wound and consequent scar is low. Cholecystectomy has come a long way since this trial. The introduction and widespread acceptance of laparoscopic technique as the treatment of choice has rendered open cholecystectomy to be an operation for exceptional, and perhaps surgically difficult, circumstances. Nowadays, the study reported is hardly feasible, yet, the results are still applicable and very relevant for other surgical procedures in the (upper) abdomen. Knowledge of the favourable results of a transverse incision may aid surgeons in their choice when finding themselves in the unfortunate position of needing conversion to open cholecystectomy. 
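The NNT of 8 and RRR of 88% quoted earlier in this discussion follow directly from the observed hernia counts at follow-up (9/63 after midline versus 1/60 after transverse incision). A minimal sketch assuming those published counts is shown below; the confidence intervals are not recomputed, and the P value printed by Fisher's exact test may differ slightly from the published 0.017 depending on which test was originally applied.

```python
# Minimal sketch using the published follow-up counts: 9/63 incisional hernias
# after midline incision vs 1/60 after transverse incision.
from scipy.stats import fisher_exact

hernia_midline, n_midline = 9, 63
hernia_transverse, n_transverse = 1, 60

risk_midline = hernia_midline / n_midline            # ~0.143
risk_transverse = hernia_transverse / n_transverse   # ~0.017

arr = risk_midline - risk_transverse                 # absolute risk reduction
nnt = 1 / arr                                        # ~8, as reported
rrr = arr / risk_midline                             # ~0.88, i.e. 88%

# Two-sided Fisher's exact test on the 2x2 table (hernia vs no hernia).
table = [[hernia_midline, n_midline - hernia_midline],
         [hernia_transverse, n_transverse - hernia_transverse]]
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")

print(f"ARR = {arr:.3f}, NNT ≈ {nnt:.1f}, RRR ≈ {rrr:.0%}, p = {p_value:.3f}")
```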
In conclusion, this investigation on transverse incisions might be helpful in reducing the incidence of incisional hernia in patients after open cholecystectomy. The midline incision is a preferred manner to achieve exposure of the abdominal cavity and is considered to be easily performed and quick. Although the midline incision is generally accepted, the incidence of incisional hernias is surprisingly high [1–5]. The choice for a particular incision should not only be based on exposure, but also on hernia incidence reduction, especially since recurrence rates after hernia repair are reported to be very high. Furthermore, the recurrence rate after incisional hernia repair is a disappointing 63 and 32% for suture and mesh repair, respectively [6]. In the light of these results, incisional hernia prevention is warranted. In this investigation, it is shown that a significant reduction (from 14.5 to 1.7%) of incisional hernia incidence was achieved by using a transverse incision. Hence, a transverse incision should be considered as the preferred incision in acute and elective surgery of the upper abdomen in which laparoscopic surgery is not an option. Full exposure of two quadrants is feasible through the use of a unilateral transverse incision in, for example, biliary, bariatric, liver and colonic surgery. The transverse incision should be part of the abdominal surgeon's armamentarium and is a preferable incision to prevent the high incidence of incisional hernia after abdominal surgery.\n\n**Question:** Compared to Midline incision what was the result of Transverse incision on Scar width?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly decreased\n"
}
] |
significantly decreased
|
481
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Clinical and radiological evaluation of Trabecular Metal and the Smith–Robinson technique in anterior cervical fusion for degenerative disease: a prospective, randomized, controlled study with 2-year follow-up\n\n ABSTRACT:\nA prospective, randomized, controlled study was carried out to compare the radiological and clinical outcomes after anterior cervical decompression and fusion (ACDF) with Trabecular MetalTM (TM) to the traditional Smith–Robinson (SR) procedure with autograft. The clinical results of cervical fusion with autograft from the iliac crest are typically satisfactory, but implications from the donor site are frequently reported. Alternative materials for cervical body interfusion have shown lower fusion rates. Trabecular Metal is a porous tantalum biomaterial with structure and mechanical properties similar to that of trabecular bone and with proven osteoconductivity. As much as 80 consecutive patients planned for ACDF were randomized for fusion with either TM or tricortical autograft from the iliac crest (SR) after discectomy and decompression. Digitized plain radiographic images of 78 (98%) patients were obtained preoperatively and at 2-year follow-up and were subsequently evaluated by two senior radiologists. Fusion/non-fusion was classified by visual evaluation of the A–P and lateral views in forced flexion/extension of the cervical spine and by measuring the mobility between the fused vertebrae. MRI of 20 TM cases at 2 years was successfully used to assess the decompression of the neural structures, but was not helpful in determining fusion/non-fusion. Pain intensity in the neck, arms and pelvis/hip were rated by patients on a visual analog scale (VAS) and neck function was rated using the Neck Disability Index (NDI) the day before surgery and 4, 12 and 24 months postoperatively. Follow-ups at 12 and 24 months were performed by an unbiased observer, when patients also assessed their global outcome. Fusion rate in the SR group was 92%, and in the TM group 69% (P < 0.05). The accuracy of the measurements was calculated to be 2.4°. Operating time was shorter for fusion with TM compared with autograft; mean times were 100 min (SD 18) and 123 min (SD 23), respectively (P = 0.001). The patients' global assessments of their neck and arm symptoms 2 years postoperatively for the TM group were rated as 79% much better or better after fusion with TM and 75% using autograft. Pain scores and NDI scores were significantly improved in both groups when compared with baseline at all follow-ups, except for neck pain at 1 year for the TM group. There was no statistically significant difference in clinical outcomes between fusion techniques or between patients who appeared radiologically fused or non-fused. There was no difference in pelvic/hip pain between patients operated on with or without autograft. 
In our study, Trabecular Metal showed a lower fusion rate than the Smith–Robinson technique with autograft after single-level anterior cervical fusion without plating. There was no difference in clinical outcomes between the groups. The operative time was shorter with Trabecular Metal implants.\n\nBODY.INTRODUCTION:\nAnterior surgery of the cervical disc with fusion using autograft from the iliac crest was introduced in the 1950s [10, 40]. The clinical results are typically satisfactory, with at least 75–80% of the patients satisfied, reporting reduced pain intensity, improved function and neurological restitution [9, 24, 27, 30, 32, 35]. However, infections, hematomas and longstanding pain [12, 25, 28, 33, 36, 37, 50] are frequently reported complications from the donor site. Allografts have been widely used, but imply risks of producing immunogenic response from the host, which might disturb fusion healing and involve the risk of transmitting infections [2, 5, 16, 26]. The risk is low, but can still be important if the infection is severe, as shown, e.g., when HIV is detected [7]. The ideal substitute for autograft should provide all three of its fundamental properties: osteogenicity, osteoconductivity and osteoinductivity. Several implants have been tested, but no ideal substitute or surgical method has been found [44]. Previously we have reported a low fusion rate for a cervical body interfusion with carbon fiber cage (Brantigan) [45]. Trabecular Metal (TM) is a porous tantalum biomaterial with structure and mechanical properties similar to trabecular bone (Fig. 1) and has been shown to be more osteoconductive than other commercially available biomaterials [4, 11, 22].Fig. 1Trabecular MetalTM implant The objectives of the study were to measure and compare the radiological and clinical outcomes of anterior cervical decompression and fusion (ACDF) with Trabecular Metal (TM) devices and the traditional Smith–Robinson (SR) procedure with autograft.\n\nBODY.METHODS:\nAll patients scheduled for single-level anterior cervical decompression and fusion (ACDF) and fulfilling the criteria for the study were consecutively invited to participate during the period February 2002 to September 2003. Five patients declined participation and the remaining 80 provided informed consent. Study inclusion criteria were cervical radiculopathy with or without myelopathy due to degenerative disc disease (including disc herniation and/or spondylosis) with compatible MRI and clinical findings. Exclusion criteria were previous cervical spine surgery, posttraumatic neck pain, inflammatory systemic disease, another neurological disease and drug or alcohol abuse. No patient had spontaneous fusion at the adjacent segments. A flowchart of the study is presented in Fig. 2.Fig. 2Flowchart for the study After discectomy, the subchondral bony end plates were roughened by the burr until they bled, taking care that they would be able to function as a bearing surface for the implant. The posterior longitudinal ligament was removed in the majority of cases, and consequently osteophytes, if present, were removed. After decompression had been completed, randomization to fusion group was performed in the theater by a nurse using sealed envelopes. This late randomization was used to avoid surgeons' bias to treatment group during as much as possible of the surgical procedure. An implant size that could be positioned between the end plates by light tapping was chosen. 
The tricortical autografts were taken from the iliac crest using a saw with a twin blade. A subcutaneous catheter was placed at the donor site for administration of ropivacaine hydrochloride (NaropR) 2 or 3 days postoperatively, to reduce the pain. All patients used a soft collar for 6 weeks postoperatively. The randomization procedure yielded similar group distributions of age, gender and smoking habits (Table 1). The operated segment was C3/4 in 2 patients (both SR), C4/5 in 4 patients (2 TM, 2 SR), C5/6 in 50 patients (26 TM, 24 SR), C6/7 in 23 patients (11 TM, 12 SR) and C7/T1 in 1 patient fused with TM. See Table 2 for implant size. The operations were performed by five senior surgeons, and 70 out of the 80 patients were operated on by one of the two authors (HL, LV).Table 1Patient dataTotalTMSRAge median (range) (years)49 (27–70)48 (38–59)49 (27–70)Sex, male (no.)502426Smokers (no.)261610Myelopathy* (no.)422* All had radiculopathyTable 2Trabecular Metal implant sizeSize H × D × W (mm)No. of patients5 × 11 × 1415 × 14 × 1416 × 11 × 1466 × 14 × 14127 × 11 × 1467 × 14 × 14118 × 14 × 143Total40\n\nBODY.METHODS.CLINICAL FOLLOW-UP:\nPain intensity in the neck, arms and pelvis/hip were rated by patients on a visual analog scale (VAS, 0–100), and neck function was rated using the Neck Disability Index [46] (NDI, 0–100) the day before surgery, and 4, 12 and 24 months postoperatively. Pain drawings were obtained at the same intervals. Follow-ups at 12 and 24 months were performed by an unbiased observer (ME), and patients also assessed their global outcomes at these same follow-up intervals.\n\nBODY.METHODS.RADIOLOGICAL FOLLOW-UP:\nDigitized plain radiographic images of 78 (98%) patients were obtained preoperatively, immediately postoperatively and at 2-year follow-up, and were subsequently evaluated by two senior radiologists. Consensus about fusion or non-fusion was reached after the first evaluation in 49 cases (62%) and after the second evaluation in the remaining 29 cases (38%). The second measurement was used to calculate intra-observer variability. The entire data set was analyzed to assess inter-observer variability of radiographic measurements and associated precision of the measurements. Fusion/non-fusion was classified by visual evaluation of the A–P and lateral views in forced flexion/extension of the cervical spine, i.e., (1) the presence/absence of bone-bridging or interface lucencies between TM and bone, and (2) by measuring the differences between the angles of the spinal processes of the fused vertebrae at flexion and extension. Fusion was classified as either clearly fused (I), probably fused (IIA), probably non-fused (IIB) and clearly non-fused (III) (Figs. 3, 4, 5). Finally, the material was dichotomized so that groups I and IIA were combined to fused, and groups IIB and III to non-fused. The same classification had been used in a previous study of the Brantigan carbon fiber cage [45]. For classification in group I, radiological signs of bone bridging were required and mobility of up to 1.0° was accepted. Cases classified in group II had uncertain signs of bone fusion. Group IIA had mobility of 2.0° or less and group IIB had more than 2.0°. Group III required both the absence of bone bridging and mobility of 3.0° or more.Fig. 3Clearly fused (group I) after surgery with autograft (SR)Fig. 4Clearly fused (group I) after surgery with Trabecular Metal (TM)Fig. 5Clearly non-fused (group III) after surgery with Trabecular Metal (TM). Note the radiolucent zone above the implant. 
Mobility is seen between images in flexion and extension. MRI was performed on 20 consecutive TM cases at 2-year follow-up. Several parameter sets suggested for TM in the published literature were tested [21, 47] in addition to our standard protocol for the degenerative cervical spine, and the parameter sets below were ultimately chosen. The scans were performed on a Siemens Vision 1.5 T MRI scanner using a cervical spine coil with a protocol consisting of T1-sagittal images (TR 500, TE 12, se), T2/PD sagittal images (TR 4000, TE 128, tse and TR 1300, TE 120, se), PD sagittal images (TR 1300, TE 60, se), T1 axial images (TR 600, TE 15, se), T2 axial images (TR 620, TE 10, 25°, Fl2d), T2 oblique images (TR 4000, TE 128, tse), T2 coronal images (TR 1485, TE 120, se) and finally PD coronal images (TR 1485, TE 60, se). Slice thickness was 4 mm in all images.\n\nBODY.METHODS.RADIOLOGICAL FOLLOW-UP.STATISTICAL METHODS:\nA rank-invariant non-parametric method for analysis of paired ordered categorical data was used to compare the pain ratings (VAS) and NDI for the groups. The method makes it possible to separately measure order-preserved individual changes attributable to the group change, as well as an individual change in category that is different from the change of pattern in the group [41, 42]. Clinically relevant improvement, set at 10, was calculated for VAS and NDI. χ2 tests were used to compare the groups. Student's t test was used to analyze the operative and hospital time. Fisher's exact test was used to compare the fusion rate in the groups. Inter- and intra-observer correlation was calculated using kappa analysis. A value of P < 0.05 was considered to be statistically significant.\n\nBODY.RESULTS.SURGERY:\nOperation times were shorter for fusion with TM as compared to autograft; mean times were 100 min (SD 18) and 123 min (SD 23), respectively (P = 0.001). There was no difference in intra-operative bleeding between the implant groups. Of the 80 patients, 72 had less than 50 ml of bleeding. There was no statistically significant difference in the hospital time between the groups; TM mean was 3.6 days (SD 1.1) and SR mean, 4.1 days (SD 1.7) (P = 0.18).\n\nBODY.RESULTS.CLINICAL OUTCOME:\nFor patients receiving TM, the maximal pain (VAS) was reduced from median 57 in the neck and 45 in the arm before surgery, to 40 and 14 at 1 year and to 41 and 24 at 2 years postoperatively. In the group with autografts, the corresponding VAS ratings were reduced from median 66 in the neck and 60 in the arm before surgery to 36 and 28 at 1 year, and to 24 and 28 after 2 years (Fig. 6a, b). The number of patients showing clinically relevant improvement (set to at least 10 for VAS) in neck pain was 39% with TM and 63% with SR (P = 0.07), and in arm pain 50 and 58%, respectively. Fig. 6 Pain rating (VAS) and Neck Disability Index preoperatively and at the follow-ups. The box plots illustrate the 25th and 75th percentiles with the median value marked in between. The range is shown by whiskers, but extreme outliers are separately shown by circles. There were no statistically significant differences between the surgical methods. NDI improved from median 36 preoperatively to 30 after 2 years in patients with TM, and from 44 to 25 in the SR group (Fig. 6c). Clinically relevant improvement in NDI (set to at least 10) was found in 53% of the patients with TM and 61% with SR. 
The patients' global assessment of their neck and arm symptoms 2 years postoperatively for the TM group were: 41% much better, 38% better, 10% unchanged, 8% worse, and (one patient) 3% much worse. In the SR group, the assessments were: 42% patients much better, 33% better, 13% unchanged and 12% worse, i.e., 79% were much better or better after fusion with TM and 75% using autograft. At all follow-ups of 4, 12 and 24 months, pain scores (VAS) in both neck and arm, and NDI scores were significantly improved in both groups when compared with baseline, except for neck pain (VAS) at 12 months in patients fused with TM (P = 0.06). No statistically significant difference was found between the Trabecular Metal and autograft techniques for pain scores, NDI or the patients' global assessments, and at all follow-up intervals. A trend toward a higher proportion of patients with clinically relevant improvement in neck pain (at least 10 mm VAS) was measured after 2 years in patients with autografts (P = 0.07). The clinical results and corresponding P values are presented in Table 3.Table 3The clinical results and corresponding P valuesTMSRP valueMedianq1–q3Medianq1–q3TM vs. SRTM vs. baselineSR vs. baselineMaximal neck pain (VAS)Preoperative5733–726629–774 months224–55173–500.40.0002<0.0011 year403–63363–710.60.060.0052 years415–74246–660.60.0480.002Maximal arm pain (VAS)Preoperative4518–776027–734 months62–2570–340.9<0.0001<0.0011 year142–62283–720.40.0060.0462 years240–72284–630.80.0340.008Maximal pelvic pain (VAS)Preoperative00–2120–54 months00–420–120.20.50.91 year00–420–120.20.30.62 years00–1500–80.80.50.8Neck Disability IndexPreoperative3625–474430–511 year288–44279–490.80.0050.0012 years3012–47258–440.80.0010.001Clinically relevant improvement inNeck pain (at least 10 for VAS)39%63%0.07Arm pain (at least 10 for VAS)50%58%0.6NDI (at least 10)53%60%0.6Patients' global assessment at 2 years (%)Much better4142Better3833Unchanged1013Worse812Much worse30Better/much better79750.5Bold values are P values < 0.05 No differences in clinical outcomes were seen between patients who appeared radiologically fused or non-fused (P = 0.6). There was a tendency toward poorer clinical outcome for smokers compared with non-smokers, estimated by the patients' global assessments (P = 0.07).\n\nBODY.RESULTS.PELVIC PAIN:\nThere was no difference in pelvic/hip pain (at the donor site) preoperatively and at 4, 12 or 24 months, between patients fused with and without autograft. Further analysis of the pain drawings showed eight patients with markings at the right iliac crest (four SR, four TM). However, the majority had marked this as related to the pain caused by lumbago/sciatica or generalized pain. Only one patient, who had been fused with TM, marked localized pain in this area. In summary, no remaining donor site pain was marked in the pain drawings, and none was seen in the VAS scoring.\n\nBODY.RESULTS.COMPLICATIONS.FURTHER SURGERY:\nThree patients were reoperated: two of them because of non-fusion and one due to graft dislocation. All had been primarily fused with autografts (SR). They were all clearly fused 2 years postoperatively. One patient fused with TM was operated on at the adjacent segment after 19 months. The only patient with remaining symptoms due to complications 2 years after surgery had a sensory deficit below the donor site at the iliac crest (SR). Further complications in the SR group included three patients with wound infections at the iliac crest, and one of them with an infected hematoma. 
One patient had pneumonia and one had a lower urinary tract infection (cystitis). All infections were cured after antibiotic treatment. One patient developed a fissure in the autograft during the primary surgery. A plate was added to the fixation directly, and the fusion healed without further complications. Among the patients fused with TM, two had transient hoarseness, and one of them also had swallowing disturbances. One patient was treated with antibiotics for a urinary tract infection. In summary, nine patients in the SR group and three patients in the TM group had complications, but only one patient (SR) had symptoms 2 years after surgery.\n\nBODY.RESULTS.RADIOLOGICAL OUTCOME:\nThe fusion rate shown by the radiological analysis is presented in Table 4. All patients in group III (clearly non-fused) showed at least 4° of mobility (the limit for the group set by the classification was 3.0°). There was no statistically significant difference in the fusion rate between smokers (92%) and non-smokers (74%) (P = 0.2). Smokers operated on with TM showed an 87% fusion rate. Kappa analysis showed 0.63 and 0.66 for the intra-observer correlation, and 0.58 for inter-observer correlation.Table 4Radiological fusionFusion groupTMSRNo.PercentageNo.PercentageI (clearly fused)18463589IIA (probably fused)92313IIB (probably non-fused)51300III (clearly non-fused)71838Fusion rate (I + IIA)6992\n\nBODY.RESULTS.MAGNETIC RESONANCE IMAGING:\nMRI, of 20 TM cases was successfully used to assess decompression of the neural structures, but was not helpful in determining fusion/non-fusion attributable to metal artifacts in the area immediately surrounding the implants.\n\nBODY.DISCUSSION:\nAutograft is referred to as the gold standard for spinal fusion [43] due to its unique combination of osteogenicity, osteoconductivity and osteoinductivity. We had hypothesized that similarly high fusion rates for TM as for autograft could be obtained, but without the risk of complications from the donor site associated with autograft. The fusion rate of carbon fiber cages used in the treatment for the degenerative cervical spine was 62% in our previous study of the Brantigan cage [45], which led us to discontinue the use of the device. TM cages were chosen for the present study because of the unique microstructure of the material and because of the published affinity of osteocytes to tantalum metal [22]. These factors were hypothesized to promote bone ingrowth and enhance fusion. The fusion rate for TM in the present study was higher than for the carbon fiber cages, but lower than that of the SR group. In a recent study, Fernández-Fairen et al. [14] compared TM used as a stand-alone cage with autograft used with plate. The fusion rate was 89% for TM and 85% for autograft with plate. No statistically significant difference in radiological fusion or in clinical outcome was found between the groups. Criteria for fusion were that \"segments were deemed fused when there was evidence of bony bridging around the implant and/or <2° of variation of Cobb's angle on F/E radiographs or <2 mm of variation in the interspinous distance, in the absence of periimplant radiolucency\". We had similar criteria for fusion, besides measuring the movement between the spinal processes. The criterion accepting <2 mm movement is probably wider compared to the criterion of <2° variation of the angle [8, 13]. With less stringent criteria for fusion (more motion allowed), the apparent fusion rates increase, as demonstrated by Fasset [13]. 
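The fusion grading used in the Methods (bone bridging plus the 1.0°/2.0°/3.0° mobility thresholds) and the dichotomized rates shown in Table 4 reduce to a simple rule on two observations per segment. The sketch below restates that rule for illustration; the function and variable names are ours, the thresholds and counts are taken from the text, and the handling of combinations the paper does not describe explicitly (for example, visible bridging with more than 1.0° of mobility) is an assumption.

```python
# Illustrative re-statement of the paper's fusion grading and of the dichotomized
# fusion rates in Table 4; function and variable names are ours. bridging is
# "present", "uncertain" or "absent"; mobility_deg is the flexion-extension
# difference between spinous-process angles in degrees.
def fusion_group(bridging: str, mobility_deg: float) -> str:
    if bridging == "present" and mobility_deg <= 1.0:
        return "I"        # clearly fused
    if bridging == "absent" and mobility_deg >= 3.0:
        return "III"      # clearly non-fused
    # uncertain signs of bony fusion: split on the 2.0 degree threshold
    return "IIA" if mobility_deg <= 2.0 else "IIB"

def is_fused(group: str) -> bool:
    # Groups I and IIA were pooled as fused, IIB and III as non-fused.
    return group in ("I", "IIA")

# Table 4 counts: TM I=18, IIA=9, IIB=5, III=7; SR I=35, IIA=1, IIB=0, III=3.
tm = {"I": 18, "IIA": 9, "IIB": 5, "III": 7}
sr = {"I": 35, "IIA": 1, "IIB": 0, "III": 3}
for name, counts in (("TM", tm), ("SR", sr)):
    fused = sum(n for g, n in counts.items() if is_fused(g))
    total = sum(counts.values())
    print(f"{name}: {fused}/{total} fused = {fused / total:.0%}")  # 69% and 92%
```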
Smokers were excluded from the study by Fernández-Fairen et al. [14], while we had 40% smokers in the group fused with TM. We found no statistically significant difference between smokers and non-smokers in our study, but it is still possible that smoking had some adverse influence on the fusion healing. On the other hand, the fusion rate in our control group with autograft was 92%, while it was 85% after autograft with plate in the study by Fernandez et al., though 25% of our patients operated on with autograft were smokers. Our patients were not randomized to fusion group until the major part of the surgical procedure including the decompression was completed, which reduced the risk for surgeons' bias, while the preoperative randomization in the study by Fernandez et al. might have had an adverse influence on the control group. Wigfield et al. [49] have presented a study with tantalum interbody implant, where inclusion of patients was halted after radiographs 6 weeks postoperatively had shown inferior end-plate lucency, raising concerns about delayed fusion or non-fusion. However, fusion was subsequently noted in all 15 patients available for follow-up at 12 months of the 17 patients operated on with tantalum implant, but the study numbers were too small for statistical analysis. Fusion was defined as less than 4° angulation between flexion and extension radiographs and absence of radiolucency extending over more than 50% of the implant/end plate interface. Baskin and Travnelis [3] compared TM with autograft in an RCT that was terminated due to concerns over delayed fusion after 39 patients had been enrolled. Of the 28 patients operated on with TM, 6 out of 16 patients (37%) who were examined with radiographs at 24 months were fused. A low fusion rate with TM was found by Zoëga and Lind [51] as well. Two years after ACDF with TM cage, none of the 13 patients had fusion. Those authors used radiostereometric analysis (RSA) for the follow-up, which is a very sensitive method for detecting motion [17, 29, 53]. Clinical outcome data showed no statistical difference between non-fused and fused patients in the present study. Earlier studies have shown divergent results concerning correlation between fusion and clinical outcome, with some pointing to the importance of the fusion for the clinical outcome [9, 27, 48] and others denying such a connection [12, 25, 31]. Addressing the fusion rate alone (without considering the clinical outcome), the use of TM as stand-alone device does not seem sufficient. The fusion rate with TM might be enhanced with an anterior plate, considering published results of TM with and without pedicle screws used in the porcine lumbar spine [54], as well as fusion rates for TM with allograft and anterior plate [35]. The use of an anterior plate in these studies suggests that initial stability may be an important factor in achieving fusion [36]. Because of the results obtained, we now use TM together with an anterior plate. Because our earlier study of the Brantigan cage showed closer correlation between radiological fusion and clinical outcome 5 years postoperatively as compared with 2 years [27], the present study will be extended. It has been advocated that a fusion cage can avoid subsidence better than an autograft, due to collapse of the latter. Some studies support this [45], while others show similar subsidence with the cage as well [15, 23]. 
This question was not addressed in the present study, where the radiological evaluation focused on whether the operated segment was fused or not. The accuracy of measurements of motion on digitized radiograph images was considerably higher in the present study (2.4°, 95% CI) than in our previous experience measuring on conventional radiographic films. The accuracy of measurements on conventional radiograph images has previously been estimated at 5° [17] and the cutoff for mobility has been set at 4° for studies of cervical implants. The described method using digitized radiograph images has reduced the difference in accuracy compared to the much more complex and expensive radiostereometry (RSA). We estimated the accuracy for RSA in the cervical spine at about 1° in a previous study [25, 29], which is less accurate than in the lumbar spine, mainly owing to the small size of the cervical vertebral bodies. With distortion-compensated roentgen analysis (DCRA), another technique for computerized analysis of conventional radiographs, Leivseth found an error of 2.4° [20]. MRI of 20 TM cases was successfully used to assess decompression of the neural structures, but was not helpful in determining fusion/non-fusion. The artifacts from the implants could be limited to the structures immediately surrounding the implants. Hence, the spinal canal and the foraminae could be visualized, and the decompression assessed, but interpretation of the interface between implant and vertebral body was disturbed. This is in contrast to the experience in the lumbar spine [personal communication, D Robertson] and is mainly due to the smaller size of the cervical vertebrae. The primary advantage of using an implant rather than autograft bone for ACDF is that it avoids complications associated with the donor site. Several previous studies have reported persistent pain in 15–40% of the patients 2 years after surgery [6, 12, 25, 33, 37, 50], though some studies show that this is less frequent [1, 34]. In our earlier study of the Brantigan cage, we found more donor site pain immediately postoperatively when using a conventional graft from the iliac crest as compared with using a percutaneous technique [45]. In the present study, no residual donor site pain was found at 4 months or later after surgery. In the early postoperative period, donor site pain is frequent, and it was experienced by our patients, but no assessment of the pain was made in this period. The administration of ropivacaine hydrochloride (NaropR) subcutaneously for the first few postoperative days resulted in pain reduction in our patients. This postoperative pain reduction might have reduced the tendency to persisting pain as well, due to less central sensitization caused by the postoperative pain [18, 19]. Sing et al. [38] have shown the good effects of continuous local anesthetic infusion on the acute graft-related pain as well as a remaining effect 4 years postoperatively [39]. In both groups in our study, 10% of the patients marked the pelvic/hip region on the pain drawing at 2 years, which illustrates the importance of having a control group for all follow-ups. Patients operated on with autografts were at risk of rare complications, such as neuralgia, although this did not occur in the moderate number of studied patients. Despite absence of chronic pain, we still found donor site morbidity; one patient had lasting sensory disturbance and three were treated for local infections. 
The clinical outcome in the present study showed reductions of 28 and 22 mm in pain rating (VAS) for the neck and arm, respectively, a 12-point improvement in NDI, and improvement for 77% of the patients in the global assessment. This is in accordance with earlier studies [9, 24, 25, 30, 32, 35, 45, 52].\n\nBODY.CONCLUSIONS:\nThis study of uninstrumented single-level ACDF without plating showed a lower fusion rate with Trabecular Metal than with the Smith–Robinson technique with autograft. There were no differences in the clinical outcomes between the groups, and there were no differences in outcomes between patients who appeared radiologically fused or non-fused. The operating time was shorter with Trabecular Metal implants. No remaining donor site pain at the iliac crest was seen at 4 months or later.\n\n**Question:** Compared to anterior cervical decompression and fusion (ACDF) with Trabecular Metal™ (TM), what was the result of the traditional Smith–Robinson (SR) procedure with autograft on Fusion rate?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly increased\n"
}
] |
significantly increased
|
224
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: A heart failure self-management program for patients of all literacy levels: A randomized, controlled trial [ISRCTN11535170]\n\n ABSTRACT.BACKGROUND:\nSelf-management programs for patients with heart failure can reduce hospitalizations and mortality. However, no programs have analyzed their usefulness for patients with low literacy. We compared the efficacy of a heart failure self-management program designed for patients with low literacy versus usual care.\n\nABSTRACT.METHODS:\nWe performed a 12-month randomized controlled trial. From November 2001 to April 2003, we enrolled participants aged 30–80, who had heart failure and took furosemide. Intervention patients received education on self-care emphasizing daily weight measurement, diuretic dose self-adjustment, and symptom recognition and response. Picture-based educational materials, a digital scale, and scheduled telephone follow-up were provided to reinforce adherence. Control patients received a generic heart failure brochure and usual care. Primary outcomes were combined hospitalization or death, and heart failure-related quality of life.\n\nABSTRACT.RESULTS:\n123 patients (64 control, 59 intervention) participated; 41% had inadequate literacy. Patients in the intervention group had a lower rate of hospitalization or death (crude incidence rate ratio (IRR) = 0.69; CI 0.4, 1.2; adjusted IRR = 0.53; CI 0.32, 0.89). This difference was larger for patients with low literacy (IRR = 0.39; CI 0.16, 0.91) than for higher literacy (IRR = 0.56; CI 0.3, 1.04), but the interaction was not statistically significant. At 12 months, more patients in the intervention group reported monitoring weights daily (79% vs. 29%, p < 0.0001). After adjusting for baseline demographic and treatment differences, we found no difference in heart failure-related quality of life at 12 months (difference = -2; CI -5, +9).\n\nABSTRACT.CONCLUSION:\nA primary care-based heart failure self-management program designed for patients with low literacy reduces the risk of hospitalizations or death.\n\nBODY.BACKGROUND:\nLimited literacy skills are common among adults in the United States [1]. Low literacy is associated with increased risk of hospitalization and worse control of chronic diseases [1-4]. Heart failure is a common chronic illness requiring multiple medications and significant self-care. Heart failure is the leading cause of hospitalization in the Medicare population [5]. The complexity of care for heart failure puts people with low literacy at considerable risk for adverse outcomes including hospitalization, worse quality of life, and mortality. Heart failure disease-management interventions appear effective in reducing rehospitalizations and improving quality of life [6]. 
Most randomized clinical trials of heart failure disease management completed over the last 10 years have enrolled patients during, or shortly after, hospitalization and reported the outcome of readmission [6]. Although the designs of these programs vary, several have tested education and support to enhance patient self-management as the main component of the intervention [7-10]. The content of self-management education usually includes teaching to understand medications, reduce salt intake, monitor daily weights, and recognize symptoms. Most programs include structured follow-up either by home visit, phone, or mail. Only a few, uncontrolled studies specifically ask patients to self-adjust their diuretics [11,12]. Heart failure self-management programs may be particularly effective for vulnerable populations, such as those with poor literacy [13,14]. However, to our knowledge, no previous studies have explicitly examined the role of self-management programs in a low literacy population. A recently published study and accompanying editorial suggested that such self-management support may be most effective among vulnerable populations [13,14]. Low literacy may represent a vulnerability for which we should design our programs. Disease management for patients with low literacy may require refined approaches to foster self-management skills. We developed a heart failure self-management program for use by patients with a variety of literacy levels [15]. We performed a randomized controlled trial comparing our self-management program to usual care among outpatients to test if the program could reduce hospitalizations and improve heart failure-related quality of life.\n\nBODY.METHODS.STUDY DESIGN:\nWe conducted a randomized controlled trial in the University of North Carolina (UNC) General Internal Medicine Practice, which serves a wide socioeconomic range of patients. The practice, staffed by over 20 attending faculty and 70 medical residents, cares for over 500 patients with heart failure.\n\nBODY.METHODS.STUDY PARTICIPANTS:\nTo be eligible, patients had to have a clinical diagnosis of heart failure confirmed by their primary provider through a direct interview, and one of the following: 1) chest x-ray findings consistent with heart failure, 2) ejection fraction <40% by any method, or 3) a history of peripheral edema. They also had to have New York Heart Association class II-IV symptoms within the last 3 months. Patients were excluded if they had moderate to severe dementia (based on the treating physician's clinical judgment), terminal illness with life expectancy less than 6 months, severe hearing impairment, blindness, current substance abuse, a serum creatinine >4 mg/dl or on dialysis, a requirement of supplemental oxygen at home, lacked a telephone, or were scheduled to undergo cardiac surgery or awaiting heart transplant. We did not exclude patients on the basis of literacy skill because we felt patients of all literacy levels would benefit from this intervention. Patients who read well often prefer information presented in an easy-to-read format [16]. We accepted referrals from the cardiology clinic if patients met eligibility criteria. This study was approved by the Institutional Review Board of the UNC School of Medicine, and all participants gave informed consent prior to enrollment. For participants who could not adequately read the informed consent document, the research assistant read and explained it to them. 
They were asked to sign a short form indicating that the informed consent document was reviewed and they agreed to participate. When the short form was used, a witness was asked to attest to the adequacy of the consent process.\n\nBODY.METHODS.PROCEDURES:\nParticipants were recruited between November 2001 and April 2003 from the General Internal Medicine and Cardiology Practices at UNC Hospitals. A trained research assistant screened all patients age 30–80 for use of furosemide. If the patient was on furosemide, their physician was queried about the presence of heart failure. If the patient met eligibility criteria and consented to participate, baseline data were collected. We then randomized patients by concealed allocation based on a random number generator to receive the intervention or usual care. All patients were followed for one year. All data were collected in the General Internal Medicine Practice.\n\nBODY.METHODS.INTERVENTION:\nThe intervention was delivered in the General Internal Medicine Practice. The educational materials and disease management intervention were previously described in detail, and the intervention is summarized here [15]. The intervention began with a 1-hour educational session with a clinical pharmacist or health educator during a regular clinic visit. Patients were given an educational booklet designed for low literacy patients (written below the 6th grade level and extensively pre-tested in focus groups and a pilot study [15]) and a digital scale. The educator and patient reviewed the booklet together, including management scenarios. As part of the educational session, patients were taught to identify signs of heart failure exacerbation, perform daily weight assessment, and adjust their diuretic dose. Because this intervention was aimed at patients with low literacy, the health educator used pedagogic strategies felt to improve comprehension for patients with low literacy [17]. For example, the educator had the patient teach back the information [18], engaged the patient in filling out the notebook, and used brainstorming to help the patient incorporate self-management into their lives. The educator, patient, and primary care physician collaborated to establish the patient's \"good weight\" (i.e., where the patient's heart failure was stable) and baseline diuretic dose. The educator then filled in the management plan in the patient's notebook to help the patient better manage weight fluctuations and self-adjust the diuretic dose based on weight (Figure 1). The general plan involved doubling the dosage if weight went up and halving it if weight went down. The program coordinator then made scheduled follow-up phone calls (days 3, 7, 14, 21, 28, 56) and monthly during months 3–6. The follow-up phone calls, each lasting 5–15 minutes, were designed to reinforce the educational session and provide motivation for the patients. Again, the program coordinator had the patient describe their self-management practices and offered feedback to improve them. Patients experiencing worsening symptoms were scheduled acute visits with their physician. We did not provide specialized nursing assessment, care or medication advice beyond diuretic dosing. If the patient's doctor determined that the good weight had changed, the program coordinator would revise the care plan with the patient. 
Patients enrolled in the control group received a general heart failure education pamphlet written at approximately the 7th grade level, and continued with usual care from their primary physician. The only contacts between the research team and the control patients were at enrollment and data collection.\n\nBODY.METHODS.MEASURES:\nWe assessed outcomes at 6 and 12 months through in-person interviews and review of the medical record. To be sensitive to low literacy, all interviews were conducted verbally by a trained research assistant. If patients were unable to come to clinic for the interview, it was conducted by phone. The research assistant was not blinded to the patient's study group. Primary outcomes were death or all-cause readmission and heart failure-related quality of life at the end of 12 months. Data on hospitalization dates were obtained from the patient and confirmed by medical chart review. All admissions, regardless of hospital or cause, were counted. For exploratory analyses, we classified reason for admission as cardiac or non-cardiac. Cardiac admissions included those primarily for heart failure (e.g., shortness of breath and edema relieved by diuresis) and other cardiac causes such as chest pain, arrhythmias, or syncope. Cause of admission was determined by chart review by one of the authors (D.D.) who was blinded to treatment allocation. Heart failure-related quality of life was assessed using a modified version of the Minnesota Living with Heart Failure Questionnaire (MLHF). The MLHF is a 21 question instrument with a 6-point Likert scale response option and scores ranging from 0 to 105 [19]. In pilot testing of the MLHF, we found that low literacy patients had trouble answering questions with the standard 6-point Likert scale [15], so we changed the response scale to 4 points, using a visual display with stars to represent increasing severity. The 4-point Likert scale was scored as 0, 1, 3, and 5 to approximate standard scores on the MLHF. Secondary measures included heart failure self-efficacy, knowledge, and behaviors. Self-efficacy was measured with an 8 item scale developed for the behaviors needed in this trial as suggested by self-efficacy scale developers [20]. Respondents used a 4-point Likert scale yielding a total score from 0–24. We assessed heart failure knowledge using a knowledge test previously developed for this population [15], Heart failure self-management behavior was assessed by asking patients how often they weighed themselves. We used patient self-report and the medical chart to measure age, gender, race, insurance status, income, years of education, medication use, years with heart failure, and the presence of co-morbidities. We measured literacy using the Short Test of Functional Health Literacy in Adults (S-TOFHLA) [21], a well-validated scale that correlates well with other measures of reading ability [22]. Patients who scored in the inadequate literacy range on the S-TOFHLA were considered to have \"low literacy.\" The cut-point for inadequate literacy is roughly analogous to the 4th grade reading level.\n\nBODY.METHODS.SAMPLE SIZE:\nSample size was based on pilot study results showing a 9-point improvement in MLHF scores over 3-months with the intervention [15]. Detecting a 9-point difference between intervention and control group with 80% power and alpha set at 0.05 required 70 patients per group. We aimed to enroll 150 patients to account for possible attrition, but stopped at 127 because funding ended. 
We did not power this study to detect differences in hospitalization, but studies with even smaller numbers of patients have shown a difference for that outcome [7].\n\nBODY.METHODS.STATISTICAL METHODS:\nPatients who did not return any phone calls and did not return for follow-up assessment did not have outcome data for analysis. Patients who withdrew from the study were censored at the time of withdrawal; any data collected prior to withdrawal were included in the analysis. Baseline differences between groups were assessed using t-tests for continuous outcomes and chi-squared tests for categorical outcomes. For MLHF, heart failure self-efficacy and heart failure knowledge, we used two-sample t-tests. Non-parametric tests were also performed for all comparisons, but results did not differ from the parametric tests. Because of the small sample size and the unequal distribution of baseline characteristics, we adjusted for baseline differences using linear regression. Analyses of self-reported outcomes, such as MLHF, excluded patients who died or withdrew from the study before 6 or 12 month data was collected. For hospitalization or death, we used negative binomial regression and censored patients at the time of death or withdrawal from the study. Based on the likelihood ratio test, negative binomial regression was a better fit for the data than a Poisson regression. Additionally, the Vuong test confirmed that a zero-inflated model was inappropriate [23]. Because of uneven distribution of baseline characteristics, we performed analysis of covariance with negative binomial regression to control for baseline differences [24]. We identified the following variables that could contribute to hospitalization or death based on previous studies: age, race, gender, literacy level, hypertension, diabetes, creatinine, MLHF score, use of β-blockers, angiotensin converting enzyme (ACE) inhibitors or angiotensin receptor blockers (ARBs), use of digoxin, and presence of systolic dysfunction [7,25]. Variables were not included in the model if the relationship between the variable and the study group or outcome had a p value greater than 0.3. We started with a model including the following items to arrive at the best point estimate: age, gender, hypertension, creatinine, MLHF, use of β-blockers, and use of ACE inhibitors or ARBs. We then eliminated variables from the model if p > 0.30, and if the point estimate remained within 10% of the initial estimate. We prespecified a subgroup analysis in patients with low literacy to analyze if the intervention had a similar effect. The same analysis described above was repeated for the subgroups of patients with low literacy and those with higher literacy. The initial multivariate model for the subgroups analysis included: age, gender, hypertension, MLHF, use of β-blockers, and use of ACE inhibitors or ARBs.\n\nBODY.METHODS.ROLE OF THE FUNDING SOURCE:\nThe funding sources had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; or preparation, review, or approval of the manuscript.\n\nBODY.RESULTS.PATIENTS:\nWe screened 919 patients on furosemide between November 2001 and April 2003. 127 met eligibility criteria and agreed to participate (Figure 2). Of those not enrolled, 407 did not have heart failure according to their physician, 367 did not meet eligibility criteria and 27 declined to participate. 
Of those randomized to the control group, 1 never returned after the first visit, 1 withdrew during the study and 4 died during the study. Follow-up was completed for all of the remaining participants (98%) (Figure 3). Of those randomized to the intervention group, 3 never returned after the first visit, 4 withdrew during the study and 3 died during the study. Follow-up was completed for all of the remaining participants (93%). At baseline, most characteristics were similar between the two groups (Table 1). However, the control group had more participants with hypertension, fewer with diabetes, and fewer men. Of heart failure related characteristics, the control group had more participants with systolic dysfunction, and taking β-blockers. The intervention group had more participants taking ACE inhibitors or ARBs, and digoxin. Regardless of these differences, none were statistically significant. The control group did have statistically significantly higher baseline MLHF scores representing worse symptoms at baseline.\n\nBODY.RESULTS.HOSPITALIZATION OR DEATH:\nThere were 68 hospitalizations (65) or deaths (3) in the intervention group and 111 (107 hospitalizations, 4 deaths) in the control group. The crude all-cause hospital admission or death incidence rate ratio (IRR) was 0.69 (95% CI 0.40, 1.19). After adjusting for age, gender, use of ACE inhibitor or ARB, use of a β-blocker, presence of hypertension, and baseline MLHF, intervention patients were less likely to have the outcome (IRR = 0.53; 95% CI 0.32, 0.89). 61% of patients in the control group had at least one hospitalization or died, and 42% of patients in the intervention group had at least 1 hospitalization or died (p = 0.13).\n\nBODY.RESULTS.CARDIAC HOSPITALIZATION:\n39% of patients in the control group and 34% of patients in the intervention group had at least one hospitalization for cardiac causes (p = 0.55). The unadjusted IRR was 0.79 (95% CI 0.42, 1.5). After adjusting for baseline differences, the IRR was 0.85 (95% CI 0.44, 1.7).\n\nBODY.RESULTS.HEART FAILURE-RELATED QUALITY OF LIFE:\nIn unadjusted analysis, the control group, on average, improved 5 points on the MLHF and the intervention group improved by 1 point. The difference was not statistically significant (3.5 points, 95% CI 11, -4, p = 0.36). After adjusting for baseline differences between the groups, the difference was 2 points (95% CI 9, -5, p = 0.59) suggesting no effect on heart failure-related quality of life.\n\nBODY.RESULTS.OTHER OUTCOMES.KNOWLEDGE:\nHeart failure related knowledge improved more in the intervention group than in the control group. Mean difference in score improvement was 12 percentage points (95% CI 6, 18; p < 0.001).\n\nBODY.RESULTS.OTHER OUTCOMES.SELF-EFFICACY:\nHeart failure self-efficacy improved more in the intervention group than in the control group. Mean difference in score improvement was 2 points (95% CI 0.7, 3.1; p = 0.0026).\n\nBODY.RESULTS.OTHER OUTCOMES.SELF-CARE BEHAVIOR:\nSignificantly more patients in the intervention group than in the control group reported daily weight measurement at 12 months (79% vs. 29%, p < 0.001).\n\nBODY.RESULTS.SUBGROUP ANALYSES ACCORDING TO LITERACY:\nTwenty-four patients in each group had inadequate literacy based on the S-TOFHLA (Table 2). Among these patients, there was no difference in quality of life score in unadjusted and adjusted analyses (difference = -1.6; 95% CI -15, 12); p = 0.81). For the combined outcome of hospitalization or death, the unadjusted IRR was 0.77 (95% CI 0.30, 1.94). 
After adjusting for baseline differences, the IRR was 0.39 (95% CI 0.16, 0.91). Seventy-five patients had marginal or adequate literacy based on the S-TOFHLA. We found no difference in quality of life score in unadjusted and adjusted analyses (difference = -4.2; 95% CI -14, 6; p = 0.40). Among patients in the higher literacy group, the unadjusted IRR for hospitalization or death was 0.65 (95% CI 0.33, 1.3). After adjusting for baseline differences, the IRR was 0.56 (95% CI 0.30, 1.04). We did not find a statistically significant effect modification between literacy and the intervention.\n\nBODY.DISCUSSION:\nA heart failure self-management program designed for patients with low literacy reduced the rate of the combined endpoint of hospitalization or death. The prespecified subgroup analyses suggest that patients with low literacy benefited as much from the intervention as the patients with higher literacy. The success of our intervention reflects the goals of our program. We designed an easy-to-read and use educational booklet and self-management plan, and focused on overcoming barriers to learning self-management [15]. Our intervention was founded on teaching self-management. We focused on helping patients understand signs and symptoms of worsening heart failure and perform self-adjustment of diuretics based on weight fluctuation. Many care providers would not attempt to teach patients, particularly those with low literacy, how to self-adjust their diuretic medication. We found that, with careful teaching, many patients incorporated this strategy into their daily routine successfully, as demonstrated by improved self-care behaviors. Teaching self-adjustment of diuretics, rather than the conventional teaching to call the care provider if weight fluctuates, empowers patients to take more control over their illness. Self-adjustment of diuretic dose is a prominent aspect of the self-management training we provided to the intervention patients. Other programs to improve patient self-management have not been explicit in teaching patients to self-adjust their diuretic dose based on weight fluctuation. Although our outcomes are comparable to others', using this approach puts more control into the hands of the patient. Furthermore, our intervention appears effective among patients with low literacy skills, a group often overlooked for empowering interventions. Our study adds to the growing literature on disease management programs for patients with heart failure [6], particularly those that focus on self-management training [7-10]. Studies focusing on self-management training have demonstrated comparable improvements in hospitalization rates to more comprehensive programs that aim to improve the quality of pharmaceutical prescribing, provide home visits, and take place in specialized cardiology clinics [6]. Such comprehensive programs have also been shown to reduce mortality, but self-management programs have not [6]. We did not detect any difference in heart failure related quality of life which was the outcome we powered our study to detect. Other self-management studies that have found improved quality of life have enrolled patients during a heart failure hospitalization [8,9]; however, we enrolled patients in the outpatient setting while they were clinically stable. Improving quality of life for stable outpatients may be more difficult for this type of intervention. 
We have previously reported that patients with diabetes and low literacy benefited more from a disease management intervention than those with higher literacy skills [26]. A similar result in two different chronic diseases substantiates the claim that appropriately designed disease management programs may have greater effect for low literacy or vulnerable populations, who are most at risk for adverse outcomes with usual care. This finding is particularly important in light of the recent study by DeBusk and colleagues that did not find a reduction in hospitalization with a well-designed comprehensive intervention [13]. The authors and an accompanying editorial [14] suggested that the failure to detect improvement may have occurred because the patients studied were less at-risk than in other studies. They called for more research to determine better ways of targeting disease management. We believe that low literacy is an important marker for vulnerability to adverse outcomes, and that disease management programs targeted to patients with low literacy may be an effective way of focusing resources on those most able to benefit. If patients with low literacy are to be preferentially recruited for such programs, innovative outreach and screening efforts will likely be required, as patients with low literacy may face particular barriers to accessing such care. This study should be interpreted in light of its limitations. Research assistants were not blind to group assignment during the assessment of self-reported outcomes. As such, patients in the intervention may have been more likely to inflate their responses in an effort to please the interviewer. This effect would tend to inflate patient responses to the subjective assessments of heart failure-related quality of life, self-efficacy, and self-care behaviors. The MLHF questionnaire was modified from its original form to make it easier for patients with low literacy to respond. This change in the scale may have changed its ability to detect important changes in heart failure related quality of life. Because the groups' mean scores were almost identical, we do not feel this limitation changed our results. In a similar vein, most questionnaires are not validated in low literacy populations, raising questions as to their ability to perform to the same standards. Our sample size was small, which did not allow for an even distribution of baseline variables among the groups. We controlled for baseline differences between groups in our analysis. While it is controversial whether or not to control for baseline differences in randomized controlled trials, some analysts have argued that doing so improves the power without introducing bias [24]. A larger, multi-site study would offer better control of confounders, better generalizability, and more power to determine differences in effect according to literacy. We did not collect data on the resources needed to implement this type of intervention in usual settings, and such a study and cost-effectiveness analysis would be helpful for most interventions of this type. We used health educators, not nurses or physicians, to deliver the intervention. By using less highly trained individuals to deliver the intervention, we enabled nurses and physicians to perform clinical tasks more commensurate with their training. 
Other studies that have performed global cost estimates have found that the savings from reductions in hospitalizations exceed the cost of the intervention [6].\n\nBODY.CONCLUSION:\nIn conclusion, our heart failure self-management program, designed for patients of all literacy levels, appears to reduce rates of hospitalization and death. Patients with low literacy, and other vulnerable patients, may stand to benefit most from these programs. Further research into the design, implementation, and dissemination of disease management programs for low literacy patients will be crucial for meeting the health care needs of the growing population of patients with chronic illness.\n\nBODY.COMPETING INTERESTS:\nDrs. DeWalt and Pignone have received honoraria and grants from Pfizer, Inc., Dr. Rothman has received grants from Pfizer, Inc., and Dr. Sueta is in their speakers bureau.\n\nBODY.AUTHORS' CONTRIBUTIONS:\nDD conceived of the study, participated in its design and coordination, performed statistical analyses, interpretation of the data, and drafted the manuscript. RM, MB conceived of the study and participated in its coordination. MK, KC coordinated the study, and collected the data, RR, CS participated in study design and interpretation of the data. MP conceived of the study, participated in its design and coordination, and interpretation of the data. All authors reviewed the manuscript for important intellectual content and gave final approval.\n\nBODY.PRE-PUBLICATION HISTORY:\nThe pre-publication history for this paper can be accessed here:\n\n**Question:** Compared to Standard information about self-care what was the result of Follow-up and thorough education on self-care on Minnesota living with heart failure (MLHF) questionnaire baseline scores?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly decreased\n"
}
] |
significantly decreased
|
374
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Efficacy and Tolerability of Budesonide/Formoterol in One Hydrofluoroalkane Pressurized Metered-Dose Inhaler in Patients with Chronic Obstructive Pulmonary Disease\n\n ABSTRACT:\nBackground: Combination therapy with a long-acting bronchodilator and an inhaled corticosteroid (ICS) is recommended in patients with chronic obstructive pulmonary disease (COPD) who have frequent exacerbations. The efficacy and tolerability of the combination of budesonide/formoterol have been demonstrated in patients with COPD when administered via the dry powder inhaler (DPI) in a 1-year study and when administered via the hydrofluoroalkane (HFA) pressurized metered-dose inhaler (pMDI) in a 6-month study. Objective: This study assessed the long-term efficacy and tolerability of budesonide/formoterol HFA pMDI in patients with moderate to very severe COPD. Methods: This was a 12-month, randomized, double-blind, double-dummy, parallel-group, active- and placebo-controlled, multicentre study (NCT00206167) of 1964 patients aged ≥40 years with moderate to very severe COPD conducted from 2005 to 2007 at 237 sites in the US, Europe and Mexico. After 2 weeks of treatment based on previous therapy (ICSs, short-acting bronchodilators allowed), patients received one of the following treatments twice daily: budesonide/formoterol pMDI 160/4.5 μg × two inhalations (320/9 μg); budesonide/formoterol pMDI 80/4.5 μg × two inhalations (160/9 μg); formoterol DPI 4.5 μg × two inhalations (9 μg); or placebo. Main outcome measures: The co-primary efficacy variables were pre-dose forced expiratory volume in 1 second (FEV1) and 1-hour post-dose FEV1. Results: Budesonide/formoterol 320/9 μg demonstrated greater improvements in pre-dose FEV1 versus formoterol (p = 0.008), and both budesonide/formoterol doses demonstrated greater improvements in 1-hour post-dose FEV1 versus placebo (p < 0.001). The rate of COPD exacerbations was lower in both budesonide/formoterol groups compared with formoterol and placebo (p ≤ 0.004). Both budesonide/formoterol doses were more effective than placebo (p ≤ 0.006) for controlling dyspnoea and improving health status (St George's Respiratory Questionnaire). All treatments were generally well tolerated. The incidence of pneumonia was not different for active (3.4–4.0%) and placebo (5.0%) groups. Conclusions: Budesonide/formoterol pMDI (320/9 μg and 160/9 μg) improved pulmonary function and reduced symptoms and exacerbations over 1 year in patients with moderate to very severe COPD. Only budesonide/formoterol pMDI 320/9 μg demonstrated greater efficacy for both co-primary variables compared with formoterol DPI 9 μg. 
Both budesonide/formoterol pMDI dosages were well tolerated relative to formoterol and placebo.\n\nABSTRACT.ELECTRONIC SUPPLEMENTARY MATERIAL:\nSupplementary material is available for this article at 10.2165/00003495-200969050-00004 and is accessible for authorized users.\n\nBODY.BACKGROUND:\nIn patients with chronic obstructive pulmonary disease (COPD) and frequent exacerbations, the combination of a long-acting bronchodilator and an inhaled corticosteroid (ICS) is recommended.[1] Budesonide and formoterol delivered via a single dry powder inhaler (DPI; Symbicort® Turbuhaler®; AstraZeneca, Lund, Sweden) have been shown to improve pulmonary function[2,3] and reduce COPD exacerbation rates.[2,3] Fixed-combination budesonide/formoterol is available in the US in a hydrofluoroalkane pressurized metered-dose inhaler (pMDI; Symbicort® Inhalation Aerosol; AstraZeneca LP, Wilmington, DE, USA). While most patients with COPD can benefit from either a pMDIor DPI, the pMDI formulation allows patients whomay have inadequate peak inspiratory flow for optimal delivery from a DPI to use the pMDI effectively. A recent 6-month study demonstrated the efficacy and tolerability of budesonide/formoterol pMDI in patients with moderate to very severe COPD.[4] Treatment with budesonide/formoterol pMDI resulted in significant improvements in pulmonary function, dyspnoea and health-related quality of life; however, the duration of the study was limiting with respect to the analysis of low-frequency events, such as COPD exacerbations, and precluded an assessment of safety outcomes with long-term exposure.[4] In this study, the long-term efficacy and tolerability of two dosage strengths of budesonide/formoterol pMDI were assessed over 12 months in patients with moderate to very severe COPD.\n\nBODY.PATIENTS AND METHODS.PATIENTS:\nThe inclusion criteria were designed to select a population with moderate to very severe COPD with previous exacerbations (i.e. appropriate candidates for combination ICS/long-acting β2-adrenoceptor agonist [LABA] therapy): age ≥40 years, diagnosis of symptomatic COPD for >2 years, ≥10 pack-year smoking history, prebronchodilator forced expiratory volume in 1 second (FEV1) of ≤50% of predicted normal and prebronchodilator FEV1/forced vital capacity (FVC) of <70%. Patients were to have a Modified Medical Research Council dyspnoea scale score of ≥2 and a history of at least one COPD exacerbation requiring oral corticosteroids or antibacterials within 1–12 months before the first study visit. Additional enrolment criteria were the same as those in a similar 6-month study by Tashkin et al.[4]\n\nBODY.PATIENTS AND METHODS.STUDY DESIGN:\nThis was a randomized, double-blind, double-dummy, parallel-group, active- and placebo-controlled, multicentre trial (ClinicalTrials.gov identifier: NCT00206167) conducted from 2005 to 2007 at 237 sites in the US, Europe and Mexico. The study protocol was approved by a local institutional review board and ethics committee, and written informed consent was obtained from patients. The study was designed to conform with the Declaration of Helsinki, and was consistent with the International Conference on Harmonisation and Good Clinical Practice and applicable regulatory requirements. After meeting eligibility criteria, patients entered a 2-week run-in period, during which they received ICS monotherapy if previously stable on ICS (alone or in combination) and ipratropium bromide at a fixed dose if previously receiving anticholinergics. 
Albuterol (salbutamol) was permitted for rescue use throughout the study. At randomization, all previous ICSs were discontinued and patients were randomized to one of four treatments (figure 1). Patients who were treated with ipratropium bromide during the run-in period were allowed to continue on it throughout the study period. Other allowed and disallowed concomitant medications were the same as those reported in the study by Tashkin et al.[4]\nFig. 1Study design. To maintain blinding, patients received both a pressurized metered-dose inhaler (pMDI) and a dry powder inhaler (DPI) containing either active treatment or double-dummy placebo (PL) as appropriate. This study used formoterol (FM) DPI (Oxis® Turbuhaler®, AstraZeneca, Lund, Sweden) as the FM comparator because FM is not available as a hydrofluoroalkane pMDI in the US. A previous study in asthma patients reported equivalent FM-related bronchodilatory effects when FM was administered in combination with budesonide (BUD) via pMDI or alone via DPI.[5] Patients were asked to return to the clinic for follow-up visits 3–8 at the end of months 1, 2, 4, 6, 9 and 12, and received a telephone call 4 weeks after the last clinic visit. bid = twice daily; R = randomization.\n\nBODY.PATIENTS AND METHODS.EFFICACY EVALUATIONS:\nThe co-primary efficacy variables, measured at all clinic visits, were pre-dose FEV1 and 1-hour post-dose FEV1. Additional pulmonary function variables included pre-dose and 1-hour post-dose FVC measured at all clinic visits, and morning and evening peak expiratory flow (PEF) recorded daily in patient diaries. In a subset of patients (n = 491 of 1964; 25%) who agreed to undergo spirometry testing, 12-hour serial spirometry was performed and pre-dose and 1-hour post-dose inspiratory capacity (IC) were collected at randomization and months 6 and 12. FEV1 was measured pre-dose and at 5, 15, 30, 60, 120, 180, 240, 360, 480, 600 and 720 minutes post-dose. Baseline-adjusted average 12-hour FEV1 was calculated as the area between the 12-hour post-dose FEV1-over-time curve and the baseline pre-dose FEV1, divided by observation time. Spirometry was performed according to American Thoracic Society guidelines.[6] Crapo-predicted normals for FEV1 were used.[7] Secondary efficacy endpoints included COPD exacerbations, dyspnoea (separate and composite scores) and health status, all assessed as previously described.[4] A COPD exacerbation was defined as worsening of COPD requiring an oral corticosteroid or hospitalization.[4] Dyspnoea was assessed daily before the evening dose of study medication using the Breathlessness Diary, a validated single-item component of the Breathlessness Cough and Sputum Scale (BCSS).[8] Health status was assessed at months 1, 2, 6 and 12 using the St George's Respiratory Questionnaire (SGRQ).[9,10] Sleep score, percentage of awakening-free nights (sleep score of 0) and study rescue medication use were assessed as previously described by Tashkin et al.[4]\n\nBODY.PATIENTS AND METHODS.SAFETY EVALUATIONS:\nSafety was assessed by adverse event (AE) reporting. Pneumonia events were reported by physicians based on the Medical Dictionary for Regulatory Activities (version 10.0) pneumonia-related preferred terms (pneumonia, bronchopneumonia, lobar pneumonia or pneumonia staphylococcal). 
Vital signs and 12-lead ECGs were evaluated as previously described by Tashkin et al.[4] Subsets of patients were assessed for 24-hour urinary cortisol (n = 179), 24-hour Holter monitoring (n = 520), bone mineral density (BMD) at the hip and spine regions (n = 326), and ophthalmological assessments (n = 461), including intraocular pressure and progression of lenticular opacities. In all patients, blood and urine samples were collected, and comprehensive physical examinations were carried out at the time of screening and at the end of month 12. Blood and urine samples were analysed by a central laboratory (Quest Diagnostics Clinical Laboratories, Van Nuys, CA, USA). Vital signs, including heart rate and blood pressure, were measured at all study visits. Samples to assess 24-hour urinary cortisol levels were collected in a subset of patients at or before randomization and within 1 week before the 6- and 12-month visits, and analysed by Quest Diagnostics. 24-Hour Holter monitoring was performed at screening and months 1 and 4, BMD assessments (two sets of dual energy x-ray absorptiometry scans in the hip and spine regions) were performed at screening and the end of month 12, and ophthalmology assessments (intraocular pressure and lenticular opacities) were performed at screening and the end of months 6 and 12.\n\nBODY.PATIENTS AND METHODS.STATISTICAL ANALYSES:\nThe efficacy analysis set (i.e. intent-to-treat population) included all randomized patients who received at least one dose of randomized study medication and contributed sufficient data for at least one co-primary or secondary efficacy endpoint to be calculated during the randomized treatment period. The safety analysis population included all randomized patients who received at least one dose of randomized study medication and from whom any post-randomization data were available. For the subsets of patients who underwent serial spirometry, 24-hour urinary cortisol, BMD, 24-hour Holter monitoring and ophthalmological analyses, analysis sets were defined as those patients who received at least one dose of randomized study medication and had baseline and on-treatment values for the variable being assessed. Patients who discontinued prematurely completed final visit (month 12) assessments at the time of discontinuation, followed by the 4-week follow-up. A sample size of approximately 400 patients per treatment group was calculated to allow 90% power to detect a reduction in the number of COPD exacerbations of approximately 30%, adjusting for overdispersion of 2.3. This sample size ensured >95% power to detect a difference of 0.10L in FEV1, given an estimated standard deviation of 0.3 L. All tests were two-sided, with p ≤ 0.05 considered significant. The prespecified primary comparators were formoterol DPI for pre-dose FEV1 to demonstrate the contribution of budesonide and placebo for 1-hour post-dose FEV1. To address issues of multiplicity relating to multiple-dose comparisons and multiple variables of interest, a fixed-sequence testing procedure was adopted. The primary comparisons were prespecified as budesonide/formoterol pMDI 320/9 μg compared with (i) placebo for pre-dose FEV1 and 1-hour post-dose FEV1; and (ii) formoterol DPI 9 μg for pre-dose FEV1. If significant differences were obtained, comparisons continued with budesonide/formoterol pMDI 320/9 μg compared with placebo for (i) dyspnoea; (ii) SGRQ total score; and (iii) number of exacerbations. 
If significant differences were obtained for each of these key secondary variables, the lower dose of budesonide/formoterol pMDI was tested on the co-primary variables, as previously described, and, if significant differences were obtained, testing continued with number of exacerbations, dyspnoea and SGRQ total score compared with placebo. For all secondary efficacy variables, the primary comparison was budesonide/formoterol pMDI versus placebo. Primary and secondary variables were assessed as the change from baseline to the average over the randomized treatment period except for average 12-hour FEV1, mean FEV1 at 12 hours and SGRQ at end of treatment. Changes from baseline in the co-primary efficacy variables were analysed via analysis of co-variance (ANCOVA), adjusting for treatment, country and baseline value. The following three key secondary efficacy variables were identified: (i) dyspnoea; (ii) SGRQ total score; and (iii) exacerbations. Secondary variables were analysed similar to the co-primary efficacy variables; however, exacerbation rate (number/patient-treatment year) was analysed using a Poisson regression model, adjusted for country and differential randomization time, and time to first COPD exacerbation was described via Kaplan-Meier plot and analysed using the log-rank test. The number and percentage of patients experiencing clinically meaningful changes (based on a prespecified minimal important difference) in dyspnoea were analysed using the Cochran-Mantel-Haenszel test, adjusting for country. Geometric mean 24-hour urinary cortisol levels at end of treatment were compared between treatment groups using a multiplicative ANCOVA model. Mean changes from baseline to the average during the randomized treatment period in 12-lead ECG, and Holter variables and ophthalmology variables were analysed using a model similar to that used for the co-primary variables. BMD variables (natural logarithm of the analysis timepoint minus the natural logarithm of the respective baseline value for hip and spine) were analysed using an ANCOVA model adjusting for country, treatment and natural logarithm of the baseline value. Other safety data were summarized using descriptive statistics. No formal hypothesis testing of the safety data was performed.\n\nBODY.RESULTS.PATIENTS:\nOf 1964 randomized patients, 1355 completed the study (figure 2). Discontinuation was greater with placebo versus budesonide/formoterol and formoterol driven by withdrawal of consent. Time to discontinuation was significantly (p ≤ 0.004) prolonged in both budesonide/formoterol groups versus placebo. AE was the most common reason for study discontinuation, with a similar incidence across treatments (12.1–13.5%). Approximately 60% of patients had documented use of an ICS, either alone or in combination before entering the study. The percentage of patients who discontinued from the study in the formoterol and placebo groups was slightly higher in patients previously treated with ICS (34.9% and 38.0%, respectively) compared with those not previously receiving ICS before the study (26.8% and 33.9%, respectively). In contrast, patients in the budesonide/formoterol 320/9-μg group previously receiving ICS had a lower discontinuation rate compared with those not previously receiving ICS (23.1% vs 34.1%, respectively). In the budesonide/formoterol 160/9-μg group, the percentage of patients who discontinued the study was similar for those who were previously receiving an ICS (29.5%) and those who were not (28.2%).\nFig. 
2Patient disposition. AE = adverse event; bid = twice daily; BUD = budesonide; DPI = dry powder inhaler; FM = formoterol; med = medication; PL = placebo; pMDI = pressurized metered-dose inhaler. Most demographic and disease characteristics at baseline were similar across treatments (table I). However, in the serial spirometry subset (n = 491), mean percentage reversibility was greater in the budesonide/formoterol 320/9 μg (19.9%), budesonide/formoterol 160/9 μg (20.7%) and placebo (19.5%) groups versus formoterol (16.9%). COPD severity[1] was moderate in 17.8%, severe in 60.4% and very severe in 21.5% of patients. Common co-morbid conditions included hypertension (41.6%), lipid profile abnormalities (22.0%), cardiac disease (17.7%), diabetes mellitus (11.0%), osteoporosis (10.5%), cataracts (5.2%), atrial fibrillation/arrhythmia (4.3%) and congestive cardiac failure (2.9%).\nTable IPatient demographic and baseline clinical characteristics of randomized patients\n\nBODY.RESULTS.EFFICACY EVALUATIONS.PULMONARY FUNCTION.CO-PRIMARY ASSESSMENTS:\nImprovements in pre-dose FEV1 were significantly greater for budesonide/formoterol 320/9 μg compared with formoterol (primary comparison; p = 0.008) and for both budesonide/formoterol dosages compared with placebo (p < 0.001) [figure 3a]. Although improvements in 1-hour post-dose FEV1 were significantly greater for both budesonide/formoterol dosages compared with placebo (primary comparison; p < 0.001), budesonide/formoterol 320/9 μg also resulted in significantly greater improvements compared with formoterol (p = 0.023) [figure 3b]. Improvements from baseline were apparent at the first assessment (pre-dose FEV1 at end of month 1; 1-hour post-dose FEV1 at day of randomization) and overall maintained over the 12-month treatment period for both budesonide/formoterol dosages (figure 3a and b).\nFig. 3Co-primary efficacy endpoints. Least squares mean change from baseline by study visit over the randomized treatment period in (a) pre-dose forced expiratory volume in 1 second (FEV1) and (b) 1-hour post-dose FEV1. BUD = budesonide; DPI = dry powder inhaler; FM = formoterol; pMDI = pressurized metered-dose inhaler. * p < 0.001 vs placebo; † p ≤ 0.023 vs FM.\n\nBODY.RESULTS.EFFICACY EVALUATIONS.PULMONARY FUNCTION.SECONDARY ASSESSMENTS:\nMorning and evening PEF improved significantly for all active treatments compared with placebo (p ≤ 0.012) and for both budesonide/formoterol dosages compared with formoterol (p ≤ 0.017) [table II]. Results for pre-dose and 1-hour post-dose FVC are presented in the online supplement (see figures S1a and b in the supplementary material ['ArticlePlus'] at http://links.adisonline.com/DGZ/A5).\nTable IIMean (SD) changes from baseline in additional pulmonary function assessments In the serial spirometry subset, a ≥15% improvement in FEV1 was observed at 5 minutes after dose administration (first assessment) with all active treatments on the day of randomization (budesonide/formoterol 320/9 μg, 17.7%; budesonide/formoterol 160/9 μg, 20.3%; formoterol 9 μg, 16.5%) and at end of treatment (budesonide/formoterol 320/9 μg, 21.8%; budesonide/formoterol 160/9 μg, 22.4%; formoterol 9 μg, 15.0%) [figures 4a, b and c]. There was no evidence of a diminished effect at end of treatment over the 12-hour period in the budesonide/formoterol groups. 
Mean FEV1 at 12 hours and baseline-adjusted average 12-hour FEV1 were significantly improved with both budesonide/formoterol dosages compared with placebo on the day of randomization and at end of treatment (p ≤ 0.002) [table II]. Although both budesonide/formoterol dosages demonstrated significant improvements in mean FEV1 at 12 hours and in baseline-adjusted average 12-hour FEV1 compared with formoterol on the day of randomization (p ≤ 0.029), only budesonide/formoterol 320/9 μg demonstrated this effect at the end of treatment (p ≤ 0.004) [table II].\nFig. 4Mean percentage change from baseline in forced expiratory volume in 1 second (FEV1) over 12 hours at randomization and end of treatment (EOT) for (a) budesonide (BUD)/formoterol (FM) 320/9 μg twice daily (bid) vs placebo, (b) BUD/FM 160/9 μg bid vs placebo and (c) FM 9 μg bid vs placebo. DOR = day of randomization; DPI = dry powder inhaler; pMDI = pressurized metered-dose inhaler. No significant differences in pre-dose IC were observed among the treatment groups (figure 5a). Improvements from baseline to the average over the randomized treatment period in 1-hour post-dose IC were significantly greater with both budesonide/formoterol dosages compared with placebo (p < 0.001) and formoterol (p ≤ 0.018) [figure 5b]. Both dosages of budesonide/formoterol resulted in a 350 mL (≈25%) improvement from baseline to the average over the randomized treatment period in 1-hour post-dose IC, and formoterol resulted in a 210 mL (17%) improvement in 1-hour post-dose IC.\nFig. 5Least squares mean change from baseline by study visit over the randomized treatment period in (a) pre-dose inspiratory capacity (IC) and (b) 1-hour post-dose IC. BUD = budesonide; DPI = dry powder inhaler; FM = formoterol; pMDI = pressurized metered-dose inhaler. * p < 0.001 vs placebo; † p = 0.01 vs FM; ‡ p < 0.05 vs FM; § p < 0.01 vs placebo.\n\nBODY.RESULTS.EFFICACY EVALUATIONS.CHRONIC OBSTRUCTIVE PULMONARY DISEASE (COPD) EXACERBATIONS:\nTime to first COPD exacerbation was significantly prolonged with both budesonide/formoterol dosages compared with placebo (p ≤ 0.004) and with budesonide/formoterol 320/9 μg compared with formoterol (p = 0.026) [figure 6]. In addition, significant reductions in the overall number of exacerbations per patient-treatment year were observed with budesonide/formoterol 320/9 μg and 160/9 μg versus placebo (37% and 41%, respectively; p < 0.001) and formoterol (25% and 29%, respectively; p ≤ 0.004) [see figure S2 in the supplementary material]. These reductions were driven by exacerbations treated with oral corticosteroids, the rate of which was reduced with budesonide/formoterol 320/9 μg and 160/9 μg versus placebo (37.5% and 42.9%, respectively) and formoterol (24.1% and 30.6%, respectively) [p ≤ 0.006]. In addition, the percentage of patients in the budesonide/formoterol 320/9 μg, budesonide/formoterol 160/9 μg, formoterol and placebo groups who experienced an exacerbation was greater in those receiving ICS therapy either alone or in combination before the study (33.7%, 38.2%, 39.9% and 40.1%, respectively) compared with those not previously receiving ICS (25.8%, 24.9%, 29.4% and 32.8%, respectively).\nFig. 6Kaplan-Meier probability curve for the time to first chronic obstructive pulmonary disease exacerbation during randomized treatment. BUD = budesonide; DPI = dry powder inhaler; FM = formoterol; pMDI = pressurized metered-dose inhaler. 
* p ≤ 0.004 vs placebo; † p = 0.026 vs FM.\n\nBODY.RESULTS.EFFICACY EVALUATIONS.HEALTH STATUS AND COPD SYMPTOMS:\nImprovements in SGRQ total score were significantly greater for both budesonide/formoterol dosages compared with placebo (p ≤ 0.006) and for budesonide/formoterol 160/9 μg compared with formoterol (p = 0.006; table III). The improvements from baseline in SGRQ total score in the active treatment groups were not clinically meaningful versus placebo (i.e. decrease of ≥4 points). Significantly greater improvements in COPD symptom variables were observed for both budesonide/formoterol dosages versus placebo for all variables (BCSS, dyspnoea score, cough score, sputum score, sleep score, percentage of awakening-free nights and total daily rescue medication use) [p ≤ 0.003]. Compared with formoterol, mean improvements were significantly greater with budesonide/formoterol 320/9 μg (p ≤ 0.038) for all COPD symptom variables except sputum score and percentage of awakening-free nights, and with budesonide/formoterol 160/9 μg (p ≤ 0.047) for all COPD symptom variables except BCSS, dyspnoea and sputum score (table IV).\nTable IIIMean changes (SD) in SGRQ total and domain scores from baselinea to end of treatmentbTable IVMean (SD) changes in chronic obstructive pulmonary disease (COPD) symptom variables from baselinea to the average over the randomized treatment period Improvements (i.e. reductions) in dyspnoea scores were significantly greater for all active treatments versus placebo (p ≤ 0.003) and for budesonide/formoterol 320/9 μg versus formoterol (p = 0.032; table IV). All active treatment arms demonstrated ≥0.2 points (minimal important difference) change from baseline, but only budesonide/formoterol 320/9 μg had an increase of ≥0.2 points over placebo. A significantly (p < 0.001) greater percentage of patients in both budesonide/formoterol groups and in the formoterol group experienced clinically meaningful improvements in dyspnoea versus placebo (see figure S3 in the supplementary material).\n\nBODY.RESULTS.SAFETY EVALUATIONS:\nMean treatment exposure was lowest for placebo (270 days) compared with budesonide/formoterol 320/9 μg (305 days), budesonide/formoterol 160/9 μg (299 days) and formoterol (289 days). The most commonly reported AE (irrespective of causality) was COPD, which had a lower incidence with budesonide/formoterol 320/9 μg versus placebo and a slightly higher incidence with budesonide/formoterol 160/9 μg versus placebo (table V). AEs considered by the investigator to be related to study medication were generally similar among treatment groups with the most commonly reported being oral candidiasis, COPD and dysphonia (see table SI in the supplementary material).\nTable VOverall adverse events (AEs) [irrespective of relationship to study medication] reported by ≥3% of patients The total incidence of pneumonia-related AEs (pneumonia, bronchopneumonia, lobar pneumonia and pneumonia staphylococcal) was similar for budesonide/formoterol 320/9 μg and 160/9 μg (4.0% and 3.4%, respectively) compared with formoterol (3.4%) and placebo (5.0%) [see table SII in the supplementary material]. For potential lung infections other than pneumonia, the incidence was slightly higher in the active treatment groups versus placebo, which was driven largely by bronchitis (see table SII in the supplementary material). 
AEs typically or potentially associated with local and systemic effects of inhaled corticosteroids (local effects: aphonia, dysphonia, oral candidiasis and candidiasis; systemic effects: weight gain, adrenal suppression, ocular effects, skin effects, psychiatric disorder, diabetes control, thirst, taste effects and bone effects) were more frequent with budesonide/formoterol 320/9 μg (10.3% and 4.0%, respectively) and budesonide/formoterol 160/9 μg (5.7% and 4.5%, respectively) than formoterol (0.6% and 2.6%, respectively) and placebo (2.5% and 2.7%, respectively). The overall incidence of LABA class effects (i.e. tremor, palpitation, tachycardia, potassium changes, glucose changes, headache, agitation, anxiety, sleep effects and muscle cramp) was low, but higher among active treatment groups (budesonide/formoterol 320/9 μg [9.5%], budesonide/formoterol 160/9 μg [8.9%] and formoterol [6.5%]) versus placebo (4.8%). The incidence of cardiac-related AEs was higher for all active treatments (10.5–11.3%) versus placebo (6.9%). The most common cardiac-related AEs were hypertension and angina pectoris (budesonide/formoterol 320/9 μg [2.4% and 1.2%, respectively], budesonide/formoterol 160/9 μg [1.6% and 0.8%, respectively], formoterol [2.6% and 1.8%, respectively] and placebo [2.1% and 1.0%, respectively]). Although the number of atrial fibrillation AEs was low, all cases occurred within the active treatment groups. Discontinuations due to AEs (DAEs) occurred in 238 patients, with a similar incidence observed across treatment groups (range 11.3–12.5%) [see table SIII in the supplementary material]. The most common DAE was COPD, which was highest in the formoterol group (7.3%) and lowest in the budesonide/formoterol 320/9 μg group (4.0%); the incidence was similar in the budesonide/formoterol 160/9 μg (6.1%) and placebo (6.0%) groups (see table SIII in the supplementary material). The overall incidence of non-fatal serious adverse events (SAEs) was highest with formoterol (17.8%) compared with the other treatment groups and higher with budesonide/formoterol 320/9 μg (15.6%) and budesonide/formoterol 160/9 μg (13.6%) versus placebo (12.1%) [see table SIV in the supplementary material]. The most common non-fatal SAEs (occurring in five or more patients [all treatment groups combined]) were COPD (6.8%), pneumonia (1.5%), atrial fibrillation (0.5%), angina pectoris (0.3%), bronchitis (0.3%) and coronary artery disease (0.3%). These non-fatal SAEs occurred at a similar incidence in all treatment groups, except for COPD, which was slightly more common in the active treatment groups (budesonide/formoterol 320/9 μg [7.1%], budesonide/formoterol 160/9 μg [6.7%] and formoterol [7.9%]) versus placebo (5.6%). Fifteen patients died during the randomized treatment period with no imbalance across treatments: three in the budesonide/formoterol 320/9 μg group; six in the budesonide/formoterol 160/9 μg group; two in the formoterol group; and four in the placebo group. Fifteen patients died after the randomized treatment period (≥2 days after stopping study treatment) with no imbalance across treatments: five in the budesonide/formoterol 320/9 μg group; two in the budesonide/formoterol 160/9 μg group; four in the formoterol group; and four in the placebo group. None of the deaths were considered drug related by the investigator. Clinically significant changes in vital signs, 24-hour urinary cortisol, and 12-lead ECGs, QT interval and Holter assessments were rare. 
The geometric mean values of 24-hour urinary cortisol at 6 months and end of treatment were lower in both budesonide/formoterol dosage groups compared with formoterol and placebo (see table SV in the supplementary material). Significant differences in 24-hour urinary cortisol were observed between the budesonide/formoterol 320/9 μg and placebo groups (p = 0.035) at month 6 and between the budesonide/formoterol 320/9 μg and formoterol groups (p = 0.044) at end of treatment. Corrected QT (QTc) intervals ≥450 msec were experienced by similar percentages of patients across all treatment groups, whereas few patients experienced QTc intervals ≥500 msec or a QTc change ≥60 msec (see table SVI in the supplementary material). Data from ECG and Holter recordings showed a similar incidence of new onset atrial fibrillation in the active and placebo groups. BMD was stable over the 1-year study period in all treatment groups, with small but statistically significant differences in changes from baseline observed for budesonide/formoterol 320/9 μg compared with all other treatments for total lumbar spine BMD (p ≤ 0.037) and for budesonide/formoterol 320/9 μg versus formoterol for total hip BMD (p = 0.012) [see table SVII in the supplementary material]. The differences between the treatment groups in changes from baseline in BMD were of unclear clinical relevance, as the geometric least squares mean ratios for these treatment group comparisons were close to 1 (range 0.98–0.99). Minor increases in lenticular opacities and intraocular pressure were noted across all treatment groups in the ophthalmological analysis subset; the magnitude of these changes was similar among the treatment groups (see table SVIII in the supplementary material). Small but significant differences were observed for the change from baseline in posterior subcapsular score between the budesonide/formoterol groups (p = 0.022). Clinically significant changes in ophthalmological assessment were infrequent.\n\nBODY.DISCUSSION:\nIn this study, both budesonide/formoterol dosage strengths resulted in significant improvements from baseline over the randomized treatment period in pulmonary function, health status and COPD symptoms, as measured by dyspnoea, cough and sputum production, and a clinically significant reduction in COPD exacerbation rate compared with placebo. In addition, both budesonide/formoterol pMDI dosages were well tolerated relative to formoterol alone and placebo. The results of this 12-month study confirm those from the earlier related 6-month study by Tashkin et al.[4] and are consistent with those from studies of budesonide/formoterol DPI.[2,3] Taken together, the results demonstrate that this ICS/LABA combination provides benefits beyond formoterol alone in the treatment of COPD. The contribution of budesonide to improvements from baseline in pulmonary function and COPD symptoms was demonstrated by the greater efficacy of budesonide/formoterol 320/9 μg compared with formoterol alone for the co-primary variables of pre-dose FEV1 and 1-hour post-dose FEV1, and for dyspnoea and total BCSS scores, supporting use of the higher-dose combination for COPD symptoms. Both budesonide/formoterol dosage strengths demonstrated greater improvements from baseline compared with formoterol alone in morning and evening PEF, 12-hour FEV1, symptom scores and rescue medication use. 
In addition, the 25–29% reduction in exacerbation rate observed with both budesonide/formoterol dosage strengths beyond the substantial reduction achieved with formoterol alone further demonstrates the important contribution of budesonide to the combination product. The magnitude of the reduction in exacerbation rates reported in this study is similar to that reported previously for fixed-dose regimens of budesonide/formoterol 320/9 μg twice daily[2–4] and fluticasone propionate/salmeterol[11–13] relative to LABA alone. A key difference between this 12-month study and the 6-month study of similar design reported by Tashkin et al.[4] is that this study was powered to show a difference in exacerbations, while the 6-month study was not. This study demonstrated a significant reduction in exacerbation rates in patients treated with budesonide/formoterol 320/9 μg and 160/9 μg compared with those receiving formoterol or placebo. The fact that a greater percentage of patients in the formoterol and placebo groups who previously received ICS therapy, either alone or in combination, experienced exacerbations compared with those who did not previously receive ICS therapy may result in a skewing of the data; however, a similar response was observed in combination budesonide/formoterol patients with a greater percentage of patients who previously received ICS experiencing exacerbations compared with patients who were not previously treated with ICS therapy. Similarly, a greater percentage of patients who previously received ICS therapy discontinued in the formoterol and placebo groups than those who did not previously receive ICS therapy; however, this trend was reversed in the combination budesonide/formoterol groups. The long-term nature of this study allowed for a more thorough assessment of tolerability compared with the 6-month study.[4] No differences in pneumonia incidence were observed among treatment groups in this study or in the 6-month study, despite a shorter exposure time in the placebo group.[4] These results contrast with previous COPD studies that showed an increased incidence of pneumonia with fluticasone propionate/salmeterol treatment.[11–14] However, in this study and the 6-month study,[4] the incidence of potential lung infections other than pneumonia was slightly higher in all active treatments compared with placebo; these differences were largely driven by an increased incidence of bronchitis. The incidence of SAEs, including COPD SAEs, was slightly higher in both budesonide/formoterol groups compared with placebo. This finding may be attributed to the earlier discontinuation in the placebo group, which resulted in a shorter study drug exposure (35 and 29 fewer days vs budesonide/formoterol 320/9 μg and 160/9 μg, respectively). Although the overall incidence of atrial fibrillation, a known LABA class effect, was small (1.1%), an imbalance was noted between the formoterol-containing arms and placebo. However, no differences in new-onset atrial fibrillation were apparent among groups based on 24-hour Holter monitoring or ECG data collected at repeated visits during the randomized treatment period. 
Overall, the safety profile of budesonide/formoterol pMDI in this 12-month study was similar to that reported in the 6-month study by Tashkin et al.,[4] suggesting no increase in safety concerns with a longer duration of treatment.\n\nBODY.CONCLUSIONS:\nBoth dosage strengths of budesonide/formoterol pMDI (320/9 μg and 160/9 μg) were effective and well tolerated over 1 year of treatment in this population of patients with moderate to very severe COPD. In addition, budesonide/formoterol pMDI 320/9 μg demonstrated greater efficacy compared with formoterol for pre-dose FEV1, suggesting that the higher dosage strength containing twice the daily dose of budesonide is appropriate in this patient population.\n\nBODY.ELECTRONIC SUPPLEMENTARY MATERIAL:\nSupplementary material, approximately 247 KB.\n\n**Question:** Compared to Formoterol DPI 9 μg what was the result of Budesonide/formoterol pMDI 160/9 μg on Sleep score improvement?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly increased\n"
}
] |
significantly increased
|
319
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Efficacy and Tolerability of Budesonide/Formoterol in One Hydrofluoroalkane Pressurized Metered-Dose Inhaler in Patients with Chronic Obstructive Pulmonary Disease\n\n ABSTRACT:\nBackground: Combination therapy with a long-acting bronchodilator and an inhaled corticosteroid (ICS) is recommended in patients with chronic obstructive pulmonary disease (COPD) who have frequent exacerbations. The efficacy and tolerability of the combination of budesonide/formoterol have been demonstrated in patients with COPD when administered via the dry powder inhaler (DPI) in a 1-year study and when administered via the hydrofluoroalkane (HFA) pressurized metered-dose inhaler (pMDI) in a 6-month study. Objective: This study assessed the long-term efficacy and tolerability of budesonide/formoterol HFA pMDI in patients with moderate to very severe COPD. Methods: This was a 12-month, randomized, double-blind, double-dummy, parallel-group, active- and placebo-controlled, multicentre study (NCT00206167) of 1964 patients aged ≥40 years with moderate to very severe COPD conducted from 2005 to 2007 at 237 sites in the US, Europe and Mexico. After 2 weeks of treatment based on previous therapy (ICSs, short-acting bronchodilators allowed), patients received one of the following treatments twice daily: budesonide/formoterol pMDI 160/4.5 μg × two inhalations (320/9 μg); budesonide/formoterol pMDI 80/4.5 μg × two inhalations (160/9 μg); formoterol DPI 4.5 μg × two inhalations (9 μg); or placebo. Main outcome measures: The co-primary efficacy variables were pre-dose forced expiratory volume in 1 second (FEV1) and 1-hour post-dose FEV1. Results: Budesonide/formoterol 320/9 μg demonstrated greater improvements in pre-dose FEV1 versus formoterol (p = 0.008), and both budesonide/formoterol doses demonstrated greater improvements in 1-hour post-dose FEV1 versus placebo (p < 0.001). The rate of COPD exacerbations was lower in both budesonide/formoterol groups compared with formoterol and placebo (p ≤ 0.004). Both budesonide/formoterol doses were more effective than placebo (p ≤ 0.006) for controlling dyspnoea and improving health status (St George's Respiratory Questionnaire). All treatments were generally well tolerated. The incidence of pneumonia was not different for active (3.4–4.0%) and placebo (5.0%) groups. Conclusions: Budesonide/formoterol pMDI (320/9 μg and 160/9 μg) improved pulmonary function and reduced symptoms and exacerbations over 1 year in patients with moderate to very severe COPD. Only budesonide/formoterol pMDI 320/9 μg demonstrated greater efficacy for both co-primary variables compared with formoterol DPI 9 μg. 
Both budesonide/formoterol pMDI dosages were well tolerated relative to formoterol and placebo.\n\nABSTRACT.ELECTRONIC SUPPLEMENTARY MATERIAL:\nSupplementary material is available for this article at 10.2165/00003495-200969050-00004 and is accessible for authorized users.\n\nBODY.BACKGROUND:\nIn patients with chronic obstructive pulmonary disease (COPD) and frequent exacerbations, the combination of a long-acting bronchodilator and an inhaled corticosteroid (ICS) is recommended.[1] Budesonide and formoterol delivered via a single dry powder inhaler (DPI; Symbicort® Turbuhaler®; AstraZeneca, Lund, Sweden) have been shown to improve pulmonary function[2,3] and reduce COPD exacerbation rates.[2,3] Fixed-combination budesonide/formoterol is available in the US in a hydrofluoroalkane pressurized metered-dose inhaler (pMDI; Symbicort® Inhalation Aerosol; AstraZeneca LP, Wilmington, DE, USA). While most patients with COPD can benefit from either a pMDIor DPI, the pMDI formulation allows patients whomay have inadequate peak inspiratory flow for optimal delivery from a DPI to use the pMDI effectively. A recent 6-month study demonstrated the efficacy and tolerability of budesonide/formoterol pMDI in patients with moderate to very severe COPD.[4] Treatment with budesonide/formoterol pMDI resulted in significant improvements in pulmonary function, dyspnoea and health-related quality of life; however, the duration of the study was limiting with respect to the analysis of low-frequency events, such as COPD exacerbations, and precluded an assessment of safety outcomes with long-term exposure.[4] In this study, the long-term efficacy and tolerability of two dosage strengths of budesonide/formoterol pMDI were assessed over 12 months in patients with moderate to very severe COPD.\n\nBODY.PATIENTS AND METHODS.PATIENTS:\nThe inclusion criteria were designed to select a population with moderate to very severe COPD with previous exacerbations (i.e. appropriate candidates for combination ICS/long-acting β2-adrenoceptor agonist [LABA] therapy): age ≥40 years, diagnosis of symptomatic COPD for >2 years, ≥10 pack-year smoking history, prebronchodilator forced expiratory volume in 1 second (FEV1) of ≤50% of predicted normal and prebronchodilator FEV1/forced vital capacity (FVC) of <70%. Patients were to have a Modified Medical Research Council dyspnoea scale score of ≥2 and a history of at least one COPD exacerbation requiring oral corticosteroids or antibacterials within 1–12 months before the first study visit. Additional enrolment criteria were the same as those in a similar 6-month study by Tashkin et al.[4]\n\nBODY.PATIENTS AND METHODS.STUDY DESIGN:\nThis was a randomized, double-blind, double-dummy, parallel-group, active- and placebo-controlled, multicentre trial (ClinicalTrials.gov identifier: NCT00206167) conducted from 2005 to 2007 at 237 sites in the US, Europe and Mexico. The study protocol was approved by a local institutional review board and ethics committee, and written informed consent was obtained from patients. The study was designed to conform with the Declaration of Helsinki, and was consistent with the International Conference on Harmonisation and Good Clinical Practice and applicable regulatory requirements. After meeting eligibility criteria, patients entered a 2-week run-in period, during which they received ICS monotherapy if previously stable on ICS (alone or in combination) and ipratropium bromide at a fixed dose if previously receiving anticholinergics. 
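For readers who want to encode the enrolment rules listed in the Patients section above, the following minimal Python sketch expresses the main inclusion thresholds (age, COPD duration, pack-years, pre-bronchodilator FEV1 % predicted, FEV1/FVC ratio, MMRC score and prior exacerbation history) as a single check. It is only an illustrative sketch, not part of the trial protocol; the field names and the example record are hypothetical and not taken from the trial database.

```python
# Illustrative only: encodes the published enrolment thresholds as a simple check.
# Field names and the example record are hypothetical, not from the trial database.

def meets_inclusion_criteria(p: dict) -> bool:
    """True if a screening record satisfies the main inclusion criteria described above."""
    return (
        p["age_years"] >= 40
        and p["copd_duration_years"] > 2
        and p["pack_years"] >= 10
        and p["prebd_fev1_pct_predicted"] <= 50
        and p["prebd_fev1_fvc_ratio"] < 0.70
        and p["mmrc_score"] >= 2
        and 1 <= p["months_since_last_exacerbation"] <= 12
    )

example = {
    "age_years": 63,
    "copd_duration_years": 8,
    "pack_years": 35,
    "prebd_fev1_pct_predicted": 42,
    "prebd_fev1_fvc_ratio": 0.55,
    "mmrc_score": 3,
    "months_since_last_exacerbation": 5,
}
print(meets_inclusion_criteria(example))  # True for this hypothetical patient
```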
Albuterol (salbutamol) was permitted for rescue use throughout the study. At randomization, all previous ICSs were discontinued and patients were randomized to one of four treatments (figure 1). Patients who were treated with ipratropium bromide during the run-in period were allowed to continue on it throughout the study period. Other allowed and disallowed concomitant medications were the same as those reported in the study by Tashkin et al.[4]\nFig. 1Study design. To maintain blinding, patients received both a pressurized metered-dose inhaler (pMDI) and a dry powder inhaler (DPI) containing either active treatment or double-dummy placebo (PL) as appropriate. This study used formoterol (FM) DPI (Oxis® Turbuhaler®, AstraZeneca, Lund, Sweden) as the FM comparator because FM is not available as a hydrofluoroalkane pMDI in the US. A previous study in asthma patients reported equivalent FM-related bronchodilatory effects when FM was administered in combination with budesonide (BUD) via pMDI or alone via DPI.[5] Patients were asked to return to the clinic for follow-up visits 3–8 at the end of months 1, 2, 4, 6, 9 and 12, and received a telephone call 4 weeks after the last clinic visit. bid = twice daily; R = randomization.\n\nBODY.PATIENTS AND METHODS.EFFICACY EVALUATIONS:\nThe co-primary efficacy variables, measured at all clinic visits, were pre-dose FEV1 and 1-hour post-dose FEV1. Additional pulmonary function variables included pre-dose and 1-hour post-dose FVC measured at all clinic visits, and morning and evening peak expiratory flow (PEF) recorded daily in patient diaries. In a subset of patients (n = 491 of 1964; 25%) who agreed to undergo spirometry testing, 12-hour serial spirometry was performed and pre-dose and 1-hour post-dose inspiratory capacity (IC) were collected at randomization and months 6 and 12. FEV1 was measured pre-dose and at 5, 15, 30, 60, 120, 180, 240, 360, 480, 600 and 720 minutes post-dose. Baseline-adjusted average 12-hour FEV1 was calculated as the area between the 12-hour post-dose FEV1-over-time curve and the baseline pre-dose FEV1, divided by observation time. Spirometry was performed according to American Thoracic Society guidelines.[6] Crapo-predicted normals for FEV1 were used.[7] Secondary efficacy endpoints included COPD exacerbations, dyspnoea (separate and composite scores) and health status, all assessed as previously described.[4] A COPD exacerbation was defined as worsening of COPD requiring an oral corticosteroid or hospitalization.[4] Dyspnoea was assessed daily before the evening dose of study medication using the Breathlessness Diary, a validated single-item component of the Breathlessness Cough and Sputum Scale (BCSS).[8] Health status was assessed at months 1, 2, 6 and 12 using the St George's Respiratory Questionnaire (SGRQ).[9,10] Sleep score, percentage of awakening-free nights (sleep score of 0) and study rescue medication use were assessed as previously described by Tashkin et al.[4]\n\nBODY.PATIENTS AND METHODS.SAFETY EVALUATIONS:\nSafety was assessed by adverse event (AE) reporting. Pneumonia events were reported by physicians based on the Medical Dictionary for Regulatory Activities (version 10.0) pneumonia-related preferred terms (pneumonia, bronchopneumonia, lobar pneumonia or pneumonia staphylococcal). 
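To make the baseline-adjusted average 12-hour FEV1 defined in the efficacy evaluations above concrete, the short sketch below approximates the area between the post-dose FEV1-over-time curve and the pre-dose baseline and divides it by the observation time. Trapezoidal integration is an assumption (the paper does not name the numerical method), and the FEV1 values are invented for illustration; the timepoints follow the serial-spirometry schedule listed above, with the pre-dose measurement placed at time zero.

```python
# Minimal sketch of the baseline-adjusted average 12-hour FEV1 defined above:
# area between the post-dose FEV1-over-time curve and the pre-dose baseline,
# divided by the observation time. Trapezoidal integration is assumed and the
# FEV1 values are invented.
import numpy as np

t_min = np.array([0, 5, 15, 30, 60, 120, 180, 240, 360, 480, 600, 720])  # 0 = pre-dose
fev1_l = np.array([1.10, 1.22, 1.28, 1.31, 1.33, 1.30, 1.27, 1.25,
                   1.21, 1.18, 1.15, 1.12])                              # litres, illustrative

baseline = fev1_l[0]                     # pre-dose FEV1
t_h = t_min / 60.0                       # minutes -> hours
diff = fev1_l - baseline                 # change from baseline at each timepoint
area = np.sum((diff[1:] + diff[:-1]) / 2 * np.diff(t_h))  # trapezoid rule, L*h
avg_12h = area / (t_h[-1] - t_h[0])      # average baseline-adjusted FEV1, litres

print(f"Baseline-adjusted average 12-hour FEV1: {avg_12h:.3f} L")
```

Dividing by the 12-hour observation window expresses the result as an average change in litres rather than in litre-hours.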
Vital signs and 12-lead ECGs were evaluated as previously described by Tashkin et al.[4] Subsets of patients were assessed for 24-hour urinary cortisol (n = 179), 24-hour Holter monitoring (n = 520), bone mineral density (BMD) at the hip and spine regions (n = 326), and ophthalmological assessments (n = 461), including intraocular pressure and progression of lenticular opacities. In all patients, blood and urine samples were collected, and comprehensive physical examinations were carried out at the time of screening and at the end of month 12. Blood and urine samples were analysed by a central laboratory (Quest Diagnostics Clinical Laboratories, Van Nuys, CA, USA). Vital signs, including heart rate and blood pressure, were measured at all study visits. Samples to assess 24-hour urinary cortisol levels were collected in a subset of patients at or before randomization and within 1 week before the 6- and 12-month visits, and analysed by Quest Diagnostics. 24-Hour Holter monitoring was performed at screening and months 1 and 4, BMD assessments (two sets of dual energy x-ray absorptiometry scans in the hip and spine regions) were performed at screening and the end of month 12, and ophthalmology assessments (intraocular pressure and lenticular opacities) were performed at screening and the end of months 6 and 12.\n\nBODY.PATIENTS AND METHODS.STATISTICAL ANALYSES:\nThe efficacy analysis set (i.e. intent-to-treat population) included all randomized patients who received at least one dose of randomized study medication and contributed sufficient data for at least one co-primary or secondary efficacy endpoint to be calculated during the randomized treatment period. The safety analysis population included all randomized patients who received at least one dose of randomized study medication and from whom any post-randomization data were available. For the subsets of patients who underwent serial spirometry, 24-hour urinary cortisol, BMD, 24-hour Holter monitoring and ophthalmological analyses, analysis sets were defined as those patients who received at least one dose of randomized study medication and had baseline and on-treatment values for the variable being assessed. Patients who discontinued prematurely completed final visit (month 12) assessments at the time of discontinuation, followed by the 4-week follow-up. A sample size of approximately 400 patients per treatment group was calculated to allow 90% power to detect a reduction in the number of COPD exacerbations of approximately 30%, adjusting for overdispersion of 2.3. This sample size ensured >95% power to detect a difference of 0.10L in FEV1, given an estimated standard deviation of 0.3 L. All tests were two-sided, with p ≤ 0.05 considered significant. The prespecified primary comparators were formoterol DPI for pre-dose FEV1 to demonstrate the contribution of budesonide and placebo for 1-hour post-dose FEV1. To address issues of multiplicity relating to multiple-dose comparisons and multiple variables of interest, a fixed-sequence testing procedure was adopted. The primary comparisons were prespecified as budesonide/formoterol pMDI 320/9 μg compared with (i) placebo for pre-dose FEV1 and 1-hour post-dose FEV1; and (ii) formoterol DPI 9 μg for pre-dose FEV1. If significant differences were obtained, comparisons continued with budesonide/formoterol pMDI 320/9 μg compared with placebo for (i) dyspnoea; (ii) SGRQ total score; and (iii) number of exacerbations. 
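As a rough cross-check of the sample-size statement above (>95% power to detect a 0.10 L difference in FEV1 given a standard deviation of 0.3 L with approximately 400 patients per group), the sketch below applies the usual normal approximation for a two-sided, two-sample comparison of means. A significance level of 0.05 and equal group sizes are assumptions; this is not the calculation actually performed for the trial.

```python
# Rough check of the FEV1 power statement above via the standard normal
# approximation for a two-sided, two-sample comparison of means.
# Assumptions: alpha = 0.05, equal groups of n = 400; illustrative only.
from statistics import NormalDist

delta = 0.10        # difference to detect, litres
sigma = 0.30        # assumed standard deviation, litres
n_per_group = 400
alpha = 0.05

z = NormalDist()
se = sigma * (2 / n_per_group) ** 0.5    # standard error of the mean difference
z_crit = z.inv_cdf(1 - alpha / 2)        # ~1.96
power = z.cdf(delta / se - z_crit)       # the opposite tail is negligible here

print(f"Approximate power: {power:.3f}")  # ~0.997, consistent with ">95% power"
```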
If significant differences were obtained for each of these key secondary variables, the lower dose of budesonide/formoterol pMDI was tested on the co-primary variables, as previously described, and, if significant differences were obtained, testing continued with number of exacerbations, dyspnoea and SGRQ total score compared with placebo. For all secondary efficacy variables, the primary comparison was budesonide/formoterol pMDI versus placebo. Primary and secondary variables were assessed as the change from baseline to the average over the randomized treatment period except for average 12-hour FEV1, mean FEV1 at 12 hours and SGRQ at end of treatment. Changes from baseline in the co-primary efficacy variables were analysed via analysis of co-variance (ANCOVA), adjusting for treatment, country and baseline value. The following three key secondary efficacy variables were identified: (i) dyspnoea; (ii) SGRQ total score; and (iii) exacerbations. Secondary variables were analysed similar to the co-primary efficacy variables; however, exacerbation rate (number/patient-treatment year) was analysed using a Poisson regression model, adjusted for country and differential randomization time, and time to first COPD exacerbation was described via Kaplan-Meier plot and analysed using the log-rank test. The number and percentage of patients experiencing clinically meaningful changes (based on a prespecified minimal important difference) in dyspnoea were analysed using the Cochran-Mantel-Haenszel test, adjusting for country. Geometric mean 24-hour urinary cortisol levels at end of treatment were compared between treatment groups using a multiplicative ANCOVA model. Mean changes from baseline to the average during the randomized treatment period in 12-lead ECG, and Holter variables and ophthalmology variables were analysed using a model similar to that used for the co-primary variables. BMD variables (natural logarithm of the analysis timepoint minus the natural logarithm of the respective baseline value for hip and spine) were analysed using an ANCOVA model adjusting for country, treatment and natural logarithm of the baseline value. Other safety data were summarized using descriptive statistics. No formal hypothesis testing of the safety data was performed.\n\nBODY.RESULTS.PATIENTS:\nOf 1964 randomized patients, 1355 completed the study (figure 2). Discontinuation was greater with placebo versus budesonide/formoterol and formoterol driven by withdrawal of consent. Time to discontinuation was significantly (p ≤ 0.004) prolonged in both budesonide/formoterol groups versus placebo. AE was the most common reason for study discontinuation, with a similar incidence across treatments (12.1–13.5%). Approximately 60% of patients had documented use of an ICS, either alone or in combination before entering the study. The percentage of patients who discontinued from the study in the formoterol and placebo groups was slightly higher in patients previously treated with ICS (34.9% and 38.0%, respectively) compared with those not previously receiving ICS before the study (26.8% and 33.9%, respectively). In contrast, patients in the budesonide/formoterol 320/9-μg group previously receiving ICS had a lower discontinuation rate compared with those not previously receiving ICS (23.1% vs 34.1%, respectively). In the budesonide/formoterol 160/9-μg group, the percentage of patients who discontinued the study was similar for those who were previously receiving an ICS (29.5%) and those who were not (28.2%).\nFig. 
2Patient disposition. AE = adverse event; bid = twice daily; BUD = budesonide; DPI = dry powder inhaler; FM = formoterol; med = medication; PL = placebo; pMDI = pressurized metered-dose inhaler. Most demographic and disease characteristics at baseline were similar across treatments (table I). However, in the serial spirometry subset (n = 491), mean percentage reversibility was greater in the budesonide/formoterol 320/9 μg (19.9%), budesonide/formoterol 160/9 μg (20.7%) and placebo (19.5%) groups versus formoterol (16.9%). COPD severity[1] was moderate in 17.8%, severe in 60.4% and very severe in 21.5% of patients. Common co-morbid conditions included hypertension (41.6%), lipid profile abnormalities (22.0%), cardiac disease (17.7%), diabetes mellitus (11.0%), osteoporosis (10.5%), cataracts (5.2%), atrial fibrillation/arrhythmia (4.3%) and congestive cardiac failure (2.9%).\nTable IPatient demographic and baseline clinical characteristics of randomized patients\n\nBODY.RESULTS.EFFICACY EVALUATIONS.PULMONARY FUNCTION.CO-PRIMARY ASSESSMENTS:\nImprovements in pre-dose FEV1 were significantly greater for budesonide/formoterol 320/9 μg compared with formoterol (primary comparison; p = 0.008) and for both budesonide/formoterol dosages compared with placebo (p < 0.001) [figure 3a]. Although improvements in 1-hour post-dose FEV1 were significantly greater for both budesonide/formoterol dosages compared with placebo (primary comparison; p < 0.001), budesonide/formoterol 320/9 μg also resulted in significantly greater improvements compared with formoterol (p = 0.023) [figure 3b]. Improvements from baseline were apparent at the first assessment (pre-dose FEV1 at end of month 1; 1-hour post-dose FEV1 at day of randomization) and overall maintained over the 12-month treatment period for both budesonide/formoterol dosages (figure 3a and b).\nFig. 3Co-primary efficacy endpoints. Least squares mean change from baseline by study visit over the randomized treatment period in (a) pre-dose forced expiratory volume in 1 second (FEV1) and (b) 1-hour post-dose FEV1. BUD = budesonide; DPI = dry powder inhaler; FM = formoterol; pMDI = pressurized metered-dose inhaler. * p < 0.001 vs placebo; † p ≤ 0.023 vs FM.\n\nBODY.RESULTS.EFFICACY EVALUATIONS.PULMONARY FUNCTION.SECONDARY ASSESSMENTS:\nMorning and evening PEF improved significantly for all active treatments compared with placebo (p ≤ 0.012) and for both budesonide/formoterol dosages compared with formoterol (p ≤ 0.017) [table II]. Results for pre-dose and 1-hour post-dose FVC are presented in the online supplement (see figures S1a and b in the supplementary material ['ArticlePlus'] at http://links.adisonline.com/DGZ/A5).\nTable IIMean (SD) changes from baseline in additional pulmonary function assessments In the serial spirometry subset, a ≥15% improvement in FEV1 was observed at 5 minutes after dose administration (first assessment) with all active treatments on the day of randomization (budesonide/formoterol 320/9 μg, 17.7%; budesonide/formoterol 160/9 μg, 20.3%; formoterol 9 μg, 16.5%) and at end of treatment (budesonide/formoterol 320/9 μg, 21.8%; budesonide/formoterol 160/9 μg, 22.4%; formoterol 9 μg, 15.0%) [figures 4a, b and c]. There was no evidence of a diminished effect at end of treatment over the 12-hour period in the budesonide/formoterol groups. 
Mean FEV1 at 12 hours and baseline-adjusted average 12-hour FEV1 were significantly improved with both budesonide/formoterol dosages compared with placebo on the day of randomization and at end of treatment (p ≤ 0.002) [table II]. Although both budesonide/formoterol dosages demonstrated significant improvements in mean FEV1 at 12 hours and in baseline-adjusted average 12-hour FEV1 compared with formoterol on the day of randomization (p ≤ 0.029), only budesonide/formoterol 320/9 μg demonstrated this effect at the end of treatment (p ≤ 0.004) [table II].\nFig. 4Mean percentage change from baseline in forced expiratory volume in 1 second (FEV1) over 12 hours at randomization and end of treatment (EOT) for (a) budesonide (BUD)/formoterol (FM) 320/9 μg twice daily (bid) vs placebo, (b) BUD/FM 160/9 μg bid vs placebo and (c) FM 9 μg bid vs placebo. DOR = day of randomization; DPI = dry powder inhaler; pMDI = pressurized metered-dose inhaler. No significant differences in pre-dose IC were observed among the treatment groups (figure 5a). Improvements from baseline to the average over the randomized treatment period in 1-hour post-dose IC were significantly greater with both budesonide/formoterol dosages compared with placebo (p < 0.001) and formoterol (p ≤ 0.018) [figure 5b]. Both dosages of budesonide/formoterol resulted in a 350 mL (≈25%) improvement from baseline to the average over the randomized treatment period in 1-hour post-dose IC, and formoterol resulted in a 210 mL (17%) improvement in 1-hour post-dose IC.\nFig. 5Least squares mean change from baseline by study visit over the randomized treatment period in (a) pre-dose inspiratory capacity (IC) and (b) 1-hour post-dose IC. BUD = budesonide; DPI = dry powder inhaler; FM = formoterol; pMDI = pressurized metered-dose inhaler. * p < 0.001 vs placebo; † p = 0.01 vs FM; ‡ p < 0.05 vs FM; § p < 0.01 vs placebo.\n\nBODY.RESULTS.EFFICACY EVALUATIONS.CHRONIC OBSTRUCTIVE PULMONARY DISEASE (COPD) EXACERBATIONS:\nTime to first COPD exacerbation was significantly prolonged with both budesonide/formoterol dosages compared with placebo (p ≤ 0.004) and with budesonide/formoterol 320/9 μg compared with formoterol (p = 0.026) [figure 6]. In addition, significant reductions in the overall number of exacerbations per patient-treatment year were observed with budesonide/formoterol 320/9 μg and 160/9 μg versus placebo (37% and 41%, respectively; p < 0.001) and formoterol (25% and 29%, respectively; p ≤ 0.004) [see figure S2 in the supplementary material]. These reductions were driven by exacerbations treated with oral corticosteroids, the rate of which was reduced with budesonide/formoterol 320/9 μg and 160/9 μg versus placebo (37.5% and 42.9%, respectively) and formoterol (24.1% and 30.6%, respectively) [p ≤ 0.006]. In addition, the percentage of patients in the budesonide/formoterol 320/9 μg, budesonide/formoterol 160/9 μg, formoterol and placebo groups who experienced an exacerbation was greater in those receiving ICS therapy either alone or in combination before the study (33.7%, 38.2%, 39.9% and 40.1%, respectively) compared with those not previously receiving ICS (25.8%, 24.9%, 29.4% and 32.8%, respectively).\nFig. 6Kaplan-Meier probability curve for the time to first chronic obstructive pulmonary disease exacerbation during randomized treatment. BUD = budesonide; DPI = dry powder inhaler; FM = formoterol; pMDI = pressurized metered-dose inhaler. 
* p ≤ 0.004 vs placebo; † p = 0.026 vs FM.\n\nBODY.RESULTS.EFFICACY EVALUATIONS.HEALTH STATUS AND COPD SYMPTOMS:\nImprovements in SGRQ total score were significantly greater for both budesonide/formoterol dosages compared with placebo (p ≤ 0.006) and for budesonide/formoterol 160/9 μg compared with formoterol (p = 0.006; table III). The improvements from baseline in SGRQ total score in the active treatment groups were not clinically meaningful versus placebo (i.e. decrease of ≥4 points). Significantly greater improvements in COPD symptom variables were observed for both budesonide/formoterol dosages versus placebo for all variables (BCSS, dyspnoea score, cough score, sputum score, sleep score, percentage of awakening-free nights and total daily rescue medication use) [p ≤ 0.003]. Compared with formoterol, mean improvements were significantly greater with budesonide/formoterol 320/9 μg (p ≤ 0.038) for all COPD symptom variables except sputum score and percentage of awakening-free nights, and with budesonide/formoterol 160/9 μg (p ≤ 0.047) for all COPD symptom variables except BCSS, dyspnoea and sputum score (table IV).\nTable IIIMean changes (SD) in SGRQ total and domain scores from baselinea to end of treatmentbTable IVMean (SD) changes in chronic obstructive pulmonary disease (COPD) symptom variables from baselinea to the average over the randomized treatment period Improvements (i.e. reductions) in dyspnoea scores were significantly greater for all active treatments versus placebo (p ≤ 0.003) and for budesonide/formoterol 320/9 μg versus formoterol (p = 0.032; table IV). All active treatment arms demonstrated ≥0.2 points (minimal important difference) change from baseline, but only budesonide/formoterol 320/9 μg had an increase of ≥0.2 points over placebo. A significantly (p < 0.001) greater percentage of patients in both budesonide/formoterol groups and in the formoterol group experienced clinically meaningful improvements in dyspnoea versus placebo (see figure S3 in the supplementary material).\n\nBODY.RESULTS.SAFETY EVALUATIONS:\nMean treatment exposure was lowest for placebo (270 days) compared with budesonide/formoterol 320/9 μg (305 days), budesonide/formoterol 160/9 μg (299 days) and formoterol (289 days). The most commonly reported AE (irrespective of causality) was COPD, which had a lower incidence with budesonide/formoterol 320/9 μg versus placebo and a slightly higher incidence with budesonide/formoterol 160/9 μg versus placebo (table V). AEs considered by the investigator to be related to study medication were generally similar among treatment groups with the most commonly reported being oral candidiasis, COPD and dysphonia (see table SI in the supplementary material).\nTable VOverall adverse events (AEs) [irrespective of relationship to study medication] reported by ≥3% of patients The total incidence of pneumonia-related AEs (pneumonia, bronchopneumonia, lobar pneumonia and pneumonia staphylococcal) was similar for budesonide/formoterol 320/9 μg and 160/9 μg (4.0% and 3.4%, respectively) compared with formoterol (3.4%) and placebo (5.0%) [see table SII in the supplementary material]. For potential lung infections other than pneumonia, the incidence was slightly higher in the active treatment groups versus placebo, which was driven largely by bronchitis (see table SII in the supplementary material). 
AEs typically or potentially associated with local and systemic effects of inhaled corticosteroids (local effects: aphonia, dysphonia, oral candidiasis and candidiasis; systemic effects: weight gain, adrenal suppression, ocular effects, skin effects, psychiatric disorder, diabetes control, thirst, taste effects and bone effects) were more frequent with budesonide/formoterol 320/9 μg (10.3% and 4.0%, respectively) and budesonide/formoterol 160/9 μg (5.7% and 4.5%, respectively) than formoterol (0.6% and 2.6%, respectively) and placebo (2.5% and 2.7%, respectively). The overall incidence of LABA class effects (i.e. tremor, palpitation, tachycardia, potassium changes, glucose changes, headache, agitation, anxiety, sleep effects and muscle cramp) was low, but higher among active treatment groups (budesonide/formoterol 320/9 μg [9.5%], budesonide/formoterol 160/9 μg [8.9%] and formoterol [6.5%]) versus placebo (4.8%). The incidence of cardiac-related AEs was higher for all active treatments (10.5–11.3%) versus placebo (6.9%). The most common cardiac-related AEs were hypertension and angina pectoris (budesonide/formoterol 320/9 μg [2.4% and 1.2%, respectively], budesonide/formoterol 160/9 μg [1.6% and 0.8%, respectively], formoterol [2.6% and 1.8%, respectively] and placebo [2.1% and 1.0%, respectively]). Although the number of atrial fibrillation AEs was low, all cases occurred within the active treatment groups. Discontinuations due to AEs (DAEs) occurred in 238 patients, with a similar incidence observed across treatment groups (range 11.3–12.5%) [see table SIII in the supplementary material]. The most common DAE was COPD, which was highest in the formoterol group (7.3%) and lowest in the budesonide/formoterol 320/9 μg group (4.0%); the incidence was similar in the budesonide/formoterol 160/9 μg (6.1%) and placebo (6.0%) groups (see table SIII in the supplementary material). The overall incidence of non-fatal serious adverse events (SAEs) was highest with formoterol (17.8%) compared with the other treatment groups and higher with budesonide/formoterol 320/9 μg (15.6%) and budesonide/formoterol 160/9 μg (13.6%) versus placebo (12.1%) [see table SIV in the supplementary material]. The most common non-fatal SAEs (occurring in five or more patients [all treatment groups combined]) were COPD (6.8%), pneumonia (1.5%), atrial fibrillation (0.5%), angina pectoris (0.3%), bronchitis (0.3%) and coronary artery disease (0.3%). These non-fatal SAEs occurred at a similar incidence in all treatment groups, except for COPD, which was slightly more common in the active treatment groups (budesonide/formoterol 320/9 μg [7.1%], budesonide/formoterol 160/9 μg [6.7%] and formoterol [7.9%]) versus placebo (5.6%). Fifteen patients died during the randomized treatment period with no imbalance across treatments: three in the budesonide/formoterol 320/9 μg group; six in the budesonide/formoterol 160/9 μg group; two in the formoterol group; and four in the placebo group. Fifteen patients died after the randomized treatment period (≥2 days after stopping study treatment) with no imbalance across treatments: five in the budesonide/formoterol 320/9 μg group; two in the budesonide/formoterol 160/9 μg group; four in the formoterol group; and four in the placebo group. None of the deaths were considered drug related by the investigator. Clinically significant changes in vital signs, 24-hour urinary cortisol, and 12-lead ECGs, QT interval and Holter assessments were rare. 
The geometric mean values of 24-hour urinary cortisol at 6 months and end of treatment were lower in both budesonide/formoterol dosage groups compared with formoterol and placebo (see table SV in the supplementary material). Significant differences in 24-hour urinary cortisol were observed between the budesonide/formoterol 320/9 μg and placebo groups (p = 0.035) at month 6 and between the budesonide/formoterol 320/9 μg and formoterol groups (p = 0.044) at end of treatment. Corrected QT (QTc) intervals ≥450 msec were experienced by similar percentages of patients across all treatment groups, whereas few patients experienced QTc intervals ≥500 msec or a QTc change ≥60 msec (see table SVI in the supplementary material). Data from ECG and Holter recordings showed a similar incidence of new onset atrial fibrillation in the active and placebo groups. BMD was stable over the 1-year study period in all treatment groups, with small but statistically significant differences in changes from baseline observed for budesonide/formoterol 320/9 μg compared with all other treatments for total lumbar spine BMD (p ≤ 0.037) and for budesonide/formoterol 320/9 μg versus formoterol for total hip BMD (p = 0.012) [see table SVII in the supplementary material]. The differences between the treatment groups in changes from baseline in BMD were of unclear clinical relevance, as the geometric least squares mean ratios for these treatment group comparisons were close to 1 (range 0.98–0.99). Minor increases in lenticular opacities and intraocular pressure were noted across all treatment groups in the ophthalmological analysis subset; the magnitude of these changes was similar among the treatment groups (see table SVIII in the supplementary material). Small but significant differences were observed for the change from baseline in posterior subcapsular score between the budesonide/formoterol groups (p = 0.022). Clinically significant changes in ophthalmological assessment were infrequent.\n\nBODY.DISCUSSION:\nIn this study, both budesonide/formoterol dosage strengths resulted in significant improvements from baseline over the randomized treatment period in pulmonary function, health status and COPD symptoms, as measured by dyspnoea, cough and sputum production, and a clinically significant reduction in COPD exacerbation rate compared with placebo. In addition, both budesonide/formoterol pMDI dosages were well tolerated relative to formoterol alone and placebo. The results of this 12-month study confirm those from the earlier related 6-month study by Tashkin et al.[4] and are consistent with those from studies of budesonide/formoterol DPI.[2,3] Taken together, the results demonstrate that this ICS/LABA combination provides benefits beyond formoterol alone in the treatment of COPD. The contribution of budesonide to improvements from baseline in pulmonary function and COPD symptoms was demonstrated by the greater efficacy of budesonide/formoterol 320/9 μg compared with formoterol alone for the co-primary variables of pre-dose FEV1 and 1-hour post-dose FEV1, and for dyspnoea and total BCSS scores, supporting use of the higher-dose combination for COPD symptoms. Both budesonide/formoterol dosage strengths demonstrated greater improvements from baseline compared with formoterol alone in morning and evening PEF, 12-hour FEV1, symptom scores and rescue medication use. 
In addition, the 25–29% reduction in exacerbation rate observed with both budesonide/formoterol dosage strengths beyond the substantial reduction achieved with formoterol alone further demonstrates the important contribution of budesonide to the combination product. The magnitude of the reduction in exacerbation rates reported in this study is similar to that reported previously for fixed-dose regimens of budesonide/formoterol 320/9 μg twice daily[2–4] and fluticasone propionate/salmeterol[11–13] relative to LABA alone. A key difference between this 12-month study and the 6-month study of similar design reported by Tashkin et al.[4] is that this study was powered to show a difference in exacerbations, while the 6-month study was not. This study demonstrated a significant reduction in exacerbation rates in patients treated with budesonide/formoterol 320/9 μg and 160/9 μg compared with those receiving formoterol or placebo. The fact that a greater percentage of patients in the formoterol and placebo groups who previously received ICS therapy, either alone or in combination, experienced exacerbations compared with those who did not previously receive ICS therapy may result in a skewing of the data; however, a similar response was observed in combination budesonide/formoterol patients with a greater percentage of patients who previously received ICS experiencing exacerbations compared with patients who were not previously treated with ICS therapy. Similarly, a greater percentage of patients who previously received ICS therapy discontinued in the formoterol and placebo groups than those who did not previously receive ICS therapy; however, this trend was reversed in the combination budesonide/formoterol groups. The long-term nature of this study allowed for a more thorough assessment of tolerability compared with the 6-month study.[4] No differences in pneumonia incidence were observed among treatment groups in this study or in the 6-month study, despite a shorter exposure time in the placebo group.[4] These results contrast with previous COPD studies that showed an increased incidence of pneumonia with fluticasone propionate/salmeterol treatment.[11–14] However, in this study and the 6-month study,[4] the incidence of potential lung infections other than pneumonia was slightly higher in all active treatments compared with placebo; these differences were largely driven by an increased incidence of bronchitis. The incidence of SAEs, including COPD SAEs, was slightly higher in both budesonide/formoterol groups compared with placebo. This finding may be attributed to the earlier discontinuation in the placebo group, which resulted in a shorter study drug exposure (35 and 29 fewer days vs budesonide/formoterol 320/9 μg and 160/9 μg, respectively). Although the overall incidence of atrial fibrillation, a known LABA class effect, was small (1.1%), an imbalance was noted between the formoterol-containing arms and placebo. However, no differences in new-onset atrial fibrillation were apparent among groups based on 24-hour Holter monitoring or ECG data collected at repeated visits during the randomized treatment period. 
Overall, the safety profile of budesonide/formoterol pMDI in this 12-month study was similar to that reported in the 6-month study by Tashkin et al.,[4] suggesting no increase in safety concerns with a longer duration of treatment.\n\nBODY.CONCLUSIONS:\nBoth dosage strengths of budesonide/formoterol pMDI (320/9 μg and 160/9 μg) were effective and well tolerated over 1 year of treatment in this population of patients with moderate to very severe COPD. In addition, budesonide/formoterol pMDI 320/9 μg demonstrated greater efficacy compared with formoterol for pre-dose FEV1, suggesting that the higher dosage strength containing twice the daily dose of budesonide is appropriate in this patient population.\n\nBODY.ELECTRONIC SUPPLEMENTARY MATERIAL:\nSupplementary material, approximately 247 KB.\n\n**Question:** Compared to Placebo what was the result of Budesonide/formoterol pMDI 320/9 μg on Peak expiratory flow in the morning and evening ?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly increased\n"
}
] |
significantly increased
|
440
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: A randomized trial to monitor the efficacy and effectiveness by QT-NASBA of artemether-lumefantrine versus dihydroartemisinin-piperaquine for treatment and transmission control of uncomplicated \n\n ABSTRACT.BACKGROUND:\nMany countries have implemented artemisinin-based combination therapy (ACT) for the first-line treatment of malaria. Although many studies have been performed on efficacy and tolerability of the combination arthemeter-lumefantrine (AL) or dihydroartemisinin-piperaquine (DP), less is known of the effect of these drugs on gametocyte development, which is an important issue in malaria control.\n\nABSTRACT.METHODS AND RESULTS:\nIn this two-arm randomized controlled trial, 146 children were treated with either AL or DP. Both groups received directly observed therapy and were followed for 28 days after treatment. Blood samples were analysed with microscopy and NASBA. In comparison with microscopy NASBA detected much more gametocyte positive individuals. Moreover, NASBA showed a significant difference in gametocyte clearance in favour of AL compared to DP. The decline of parasitaemia was slower and persistence or development of gametocytes was significantly higher and longer at day 3, 7 and 14 in the DP group but after 28 days no difference could be observed between both treatment arms.\n\nABSTRACT.CONCLUSION:\nAlthough practical considerations could favour the use of one drug over another, the effect on gametocytogenesis should also be taken into account and studied further using molecular tools like NASBA. This also applies when a new drug is introduced.\n\nABSTRACT.TRIAL REGISTRATION:\nCurrent controlled trials ISRCTN36463274\n\nBODY.BACKGROUND:\nIn response to widespread resistance of Plasmodium falciparum parasites to the commonly used drugs chloroquine (CQ) and sulphadoxine-pyrimethamine (SP), many African countries recently adopted artemisinin-based combination therapy (ACT) as first-line treatment for uncomplicated malaria. The combination artemether-lumefantrine (AL) proved to be highly effective and well-tolerated in several studies in Africa [1-5]. Disadvantages of this drug combination are the twice-daily dosing and the fact that it should be administered with a fat-rich meal [6-8] or at least a cup of soya milk [2]. In Uganda, in an area of intense malaria transmission, recurrence of parasitaemia within 28 days occurred in 29% of AL treated patients, in 8.9% adjusted by genotyping, indicating recrudescence [4]. Another ACT, dihydroartemisinin combined with piperaquine (DP), which was originally developed in China, is increasingly used in Southeast Asia [9,10]. Piperaquine is an orally active bisquinoline with a half-life elimination time of 2.5–4 weeks [9]. The drug is structurally related to CQ, but still active against highly CQ-resistant P. falciparum strains [1,11]. 
This relatively inexpensive drug was well tolerated and highly effective in Southeast Asia [1,7,11-13] as was the case in two studies in Africa [4,14]. Consequently, both AL and DP are considered to be amongst the most promising artimisinin-based drugs [4,14]. Artemisinin-based drugs also act on gametocytes and thus on transmission, at least in low transmission areas [15-17]. In high transmission areas of Africa, not much information is yet available on gametocytaemia after ACT treatments and on possible influence on transmission. In a comparative study of AL and DP in Uganda, the appearance of gametocytes in those who did not have gametocytes at the start of treatment was lower from day 15 to day 42 of follow up in those treated with DP than in those treated with AL [4]. A limitation of this study was the fact that gametocytaemia was assessed by microscopic examination only. It has recently been shown that sub-microscopic gametocyte densities may significantly contribute to the transmission of malaria [18]. Adequate assessment of gametocytaemia is important. Quantitative nucleic sequence based amplification technique (QT-NASBA) has been shown to be much more sensitive for the detection of gametocytes than microscopy [3,18-20]. In this study, the post-treatment prevalence of gametocytes in children in Mbita, western Kenya was assessed, after treatment with AL and DP. Microscopy and QT-NASBA for the quantification of gametocytaemia were compared and effectiveness of both drugs regarding clinical symptoms, clearance of parasites and tolerability was assessed.\n\nBODY.MATERIALS AND METHODS.STUDY SITE AND POPULATION:\nThe study was conducted in Mbita, western Kenya, at the shores of Lake Victoria during the high malaria transmission season of April-July 2007. Mbita, is an area with highly variable transmission that depends on the local environmental circumstances that can support mosquito conditions. The EIR is calculated to be 6 infectious bites per person per month [21]. Children (6 months-12 years of age) visiting the out-patient clinic of the health centre and diagnosed with uncomplicated malaria were included after informed consent from parents or guardians. Inclusion criteria were: uncomplicated P. falciparum malaria with initial parasitaemia between 1,000 and < 200,000 parasites/μl blood, axillary temperature ≥ 37.5°C (measured with a digital thermometer) or a history of fever. Children with severe malaria, mixed infection or other underlying illness were excluded from the study. In total 146 children were recruited for the present study. Ethical approval for this study was obtained from appropriate local authorities and The Kenya Medical Research Institute (KEMRI, Nairobi, Kenya) Ethical Steering Committee (SSC protocol No 948). The trial was registered as an International Standard Randomized Controlled Trial at current controlled trials (ISRCTN36463274).\n\nBODY.MATERIALS AND METHODS.STUDY DESIGN AND TREATMENT:\nFollowing diagnosis (at day 0), the patients were randomly allocated to one of the two treatment groups following a computer generated randomization list. One group was assigned DP (Sigma-Tau, Italy) once per day for three days. One tablet of the study drug contained 20 mg of dihydroartemisinin and 160 mg of piperaquine (paediatric formulation). Treatment was according to body weight as follows: children between 4–7 kg received half a tablet per dose, those between 7–13 kg 1 tablet, 13–24 kg 2 tablets per dose and children between 24–35 kg 4 tablets. 
The other group was assigned to AL (Novartis Pharma, Switzerland). Each tablet contained 20 mg artemether and 120 mg lumefantrine. Patients received treatment according to bodyweight; i.e. children between 5–14 kg received one tablet per dose, those between 15–24 kg two tablets and those between 25–34 kg received three tablets per dose. Doses were given twice daily. All treatments were given with a glass of milk under direct supervision at the clinic or, for the 2nd dose of AL, at home.\n\nBODY.MATERIALS AND METHODS.OUTCOMES: EFFICACY:\nEfficacy was assessed using the WHO in vivo test with a follow-up period of 28 days [22]. At enrollment (day 0) a full clinical examination was performed; information was recorded on a case record form. At initial diagnosis (day 0) and during follow-up (day 1, 2, 3, 7, 14, and 28), finger prick blood samples were collected for microscopy, measurement of haemoglobin level and molecular analysis. Haemoglobin was measured with Hemocue 201+ analyser and cuvettes (HemoCue diagnostics B.V. Waarle, The Netherlands). Response to treatment was measured and defined according to WHO guidelines [22]. Patients showing complications or treatment failure were treated with appropriate supportive therapy. Children developing danger signs or severe malaria on day 1 or 2 of the study were withdrawn from the study, referred to the hospital, and given alternative treatment. Adverse events were recorded on the case record forms. An AE was defined as an unfavourable and unintended symptom, sign or disease. A serious adverse event (SAE) was defined as a symptom or sign that is temporally associated with the drugs administered to the patient that is life threatening or results in hospitalization, permanent and significant disability or death. SAE's were immediately reported to the ethical committee of KEMRI and the drug safety department of Sigma-Tau, Italy.\n\nBODY.MATERIALS AND METHODS.OUTCOMES: PARASITE CLEARANCE AND GAMETOCYTE DYNAMICS:\nParasite clearance and gametocyte dynamics were assessed microscopically as well as with quantitative nucleic acid sequence based amplification assay, QT-NASBA.\n\nBODY.MATERIALS AND METHODS.MICROSCOPY:\nGiemsa-stained thick and thin smears were prepared according to WHO guidelines. Two independent experienced microscopists, who were blinded to the treatment and clinical status of the patient, examined the smears for the presence of parasites and identified the observed parasite species. Parasitaemia was determined by counting the number of parasites against 200 leukocytes for the asexual stages (assuming that there are 8,000 leukocytes/μl blood). The presence of gametocytes was examined against 500 leukocytes.\n\nBODY.MATERIALS AND METHODS.QT-NASBA:\nFinger prick blood (50 ul) for NASBA analysis was collected on Whatman 903 filter paper (Whatman international Ltd. Maidston, United Kingdom) and air-dried at room temperature. Nucleic acid extraction was performed as previously described by Boom et al [23]. Real-time 18S rRNA QT-NASBA was applied to study asexual parasite clearance below microscopical threshold [20]. In order to quantify the number of parasites in blood, a 10 fold serial dilution of 106 to 10 in vitro cultured parasites/ml was used as reference and processed and analysed with NASBA. 
Furthermore, to assess prevalence of gametocytes below the detection limit of microscopy, QT-NASBA targeting Pfs25 mRNA as described by Schneider et al [20] was used on blot spots collected during follow-up.\n\nBODY.MATERIALS AND METHODS.GENOTYPING:\nIn order to discriminate between re-infection (RI) and recrudescence (RE), merozoite surface protein 1 and 2 (msp1 and msp2) and glutamate rich protein (GLURP) genotyping was performed as described by Snounou [24] on blood spots obtained at primary (day 0) and secondary infection (time point of re-occurrence). Blood spots were collected on Whatman 903 filter paper (Whatman international Ltd. Maidston, United Kingdom) and air-dried at room temperature for PCR analysis. DNA was isolated as described by Boom et al [23]. Molecular analysis was performed at Royal Tropical Institute, Amsterdam and was done blinded from the treatment that was given to the patients.\n\nBODY.MATERIALS AND METHODS.SAMPLE SIZE AND STATISTICAL ANALYSIS:\nThe aim of the study was to compare gametocytaemia after AL and DP and to compare assessment of gametocytaemia by microscopical examination versus QT-NASBA All data were entered in excel and analysed with SPSS for windows (version 12.0). Parasite densities were analysed after natural log-transformation. Where appropriate, proportions were compared with the χ2-test and means were compared with the one-way ANOVA or Student t-test. A simplified trapezoid area under the curve (AUC) analysis using gametocyte data from days 0, 3, 7, 14 and 28, as a surrogate for the infectiousness of the participants in the different treatment groups, was performed.\n\nBODY.RESULTS.PATIENT RECRUITMENT:\nIn total 1882 cases suspected of uncomplicated malaria were screened for eligibility into the study during an 8-week recruitment period in April and May 2007. 1,736 children were excluded because they did not meet the inclusion criteria (Figure 1). 146 patients fulfilling the inclusion criteria entered the study; 73 were randomly allocated to the DP arm and 73 to the AL arm. Both study groups were comparable at baseline for their demographical and clinical characteristics and parasite densities (Table 1). On completion of follow up (day 28) data of 134 patients (92%) were available for analysis. Table 1 Baseline characteristics of patients included in the study at the time of enrollment in the study Characteristic DHA-PQP (n = 73) ALN (n = 73) Sex ratio male:female 33:40 40:33 Age (in months), median (IQR) 60 (44) 52 (44) Body weight, mean kg (range) 17.62 (6–37) 17.32 (6–42) Temperature, mean °C ± SD 38.1 ± 0.99 37.8 ± 0.73 Haemoglobin mmol/L ± SD 6.33 ± 1.29 6.28 ± 1.27 Parasites/μl geometric mean (range) as determined by microscopy 12145 (1000–72640) 13379 (1080–72000) Figure 1Schematic representation and flowchart of the study. Twelve patients did not reach the study endpoint. Seven patients were lost during follow up, one was unable to take oral medication, one developed severe anaemia, one did not receive the proper drugs, one withdrew from the study and one patient died.\n\nBODY.RESULTS.TREATMENT OUTCOME:\nThere were no early treatment failures during the first three days of follow up. Only one patient in the AL arm had a recurrent parasitaemia (43,880 parasites/μl) at day 28 of follow up. Genotyping analysis revealed that this patient had a reinfection with P. falciparum. All other 133 patients who completed follow-up had an adequate clinical and parasitological response. 
After one day of treatment, over 90% of the patients had no microscopically detectable asexual parasites. In the AL group no parasites could be detected with microscopy in any of the patients at day two. One patient was still microscopically positive at day two in the DP group with 40 parasites/μl, but this patient was also microscopically negative at day 3. The parasite reduction ratios at 48 hours reproduction cycle (parasite count on admission/parasite count at 48 hours) was 8.96 * 105 at 48 hours for the AL treatment and 2.06 * 104 at 48 hours for the DP treatment. NASBA was also applied to monitor parasite dynamics below sub-microscopical level. Humidity in some of the filter papers degraded the RNA in the blood spots of some of the samples. This led to several extraction failures. In order to have a clear picture of parasite dynamics only those series with a full range of follow-up samples, i.e. 56 DP and 54 AL treated patients, were analysed. Both treatment arms showed a steep decline in parasitaemia from the day of enrollment (day 0) to day 1; 62% reduction after DP treatment and 89% reduction in the AL arm. At day 2, the level of parasitaemia was reduced to 1.2% in the AL group and 2.75% in the DP group.\n\nBODY.RESULTS.HB CONVALESCENCE, FEVER CLEARANCE AND ADVERSE EVENTS:\nAt baseline Hb levels in both treatment groups were comparable (Table 1). At day 28 all groups had a significant increase of Hb however no significant difference between the treatments on the Hb convalescence was found. Final mean Hb levels were 7.15 mmol/l ± 1.07 for the DP treatment group and 6.79 mmol/l ± 1.24 for the AL group. A possible influence of anaemia on gametocyte carriage at enrollment was not observed in the present study (p > 0.05). Fever clearance was defined as the time from receiving the assigned treatment to the time a normal body temperature was recorded (≤ 37.5°C), in study cases who presented with fever. Fever clearance was rapid in both study groups. On day 1, 10 cases (13.7%) in the DP group presented with fever and six cases (8.2%) were observed in the AL group. In the DP group fever was observed on day 2 in four cases (5.5%) and three cases (4.1%) on day 3. In the AL group cases with fever also presented on day 2 (12 study subjects, 16.4%) and on day 3 (two individuals, 2.7%). Fever was not observed during follow-up after day 7, with the exception of the child that presented with a P. falciparum reinfection on day 28 in the AL group and the child that developed broncho-pneumonia (case presented below). Furthermore the presence of fever at recruitment was no predictor for gametocyte carriage (p > 0.05) Most adverse events were mild, self limiting and consistent with symptoms of malaria. There was no significant difference between the two study groups (Table 2). One patient died. The child (63 months) had been ill for two weeks prior to presentation at the clinic. Plasmodium falciparum infection with parasitaemia of 20,120 parasites/μl was diagnosed. There was a fever (38.6°C), but there were no other complaints and no signs of severe anaemia (Hb: 6.4 mmol/L). On day 3, there were no signs of illness. On day 7, the child presented with fever (38.6°C), cough and complaints of anorexia. There was no history of significant illness or allergies. After examination (microscopy was negative for malaria parasites), broncho-pneumonia was diagnosed and the child was treated with oral phenoxymethylpenicillin for five days and Paracetamol syrup. 
On day 14, the child did not attend the follow-up visit, the parents reported that the child died a day before in a local health post. The event was assessed as unrelated to the study drug. Autopsy was not performed. Table 2 Summary of adverse events recorded during the study Adverse event DHA-PQP ALN P- value Headache 43 (58.9%) 37 (50.7%) 0.318 Abdominal pain 25 (32.4%) 26 (35.6%) 0.862 Weakness 19 (26.0%) 30 (41.1%) 0.035 Anorexia 8 (10.9%) 10 (13.7%) 0.439 Diarrhea 9 (12.3%) 7 (9.6%) 0.785 Cough 16 (21.9%) 17 (23.3%) 0.843 Vomiting 11 (15.1%) 9 (12.3%) 0.806 Pruritis 4 (5.5%) 3 (4.1%) 0.698 \n\nBODY.RESULTS.GAMETOCYTE DYNAMICS:\nThe presence of gametocytes in clinical samples was assessed by microscopy and NASBA and is presented in Table 3. At the start of the study, three patients in the DP arm (4.5%) and six patients in the AL arm (9.0%) carried microscopically detectable gametocytes. Microscopical follow-up of the presence of gametocytes during the whole study period revealed that in total 39 samples (distributed over 13 patients) in the DP arm and 18 samples (distributed over nine patients) in the AL arm carried gametocytes (not significantly different). It was observed that the microscopical detection of gametocytes in blood slides during the study was subjected to fluctuations (for example a case positive on day 0, negative on day 1 and subsequently again positive at day 3), which is probably due to the fact that gametocytes circulate at low levels. However, on day 7, three patients in the DP group and 1 in the AL group showed gametocyte positive slides for the first time. On day 28, in none of the cases gametocytes were observed by microscopy. There was no difference between children older than 60 months and younger as regards carriage of gametocytes and density. Table 3 Occurrence of gametocytes as detected by microscopy or NASBA in the different study groups at the start of the study and during subsequent follow-up. Gametocyte positive samples Day 0 Day 1 Day 2 Day 3 Day 7 Day 14 Day 28 DP group microscopy (n = 67) Total number of positive cases 3 7 7 10 7 5 0 Number of new cases observed 5 2 0 3 0 0 DP group Nasba (n = 56) Total number of positive cases 22 34 a 33 a 17 8 Number of new cases observed 13 8 5 0 AL group microscopy (n = 67) Total number of positive cases 6 3 3 3 2 1 0 Number of new cases observed 1 1 0 1 0 0 AL group Nasba (n = 54) Total number of positive cases 21 20 11 12 5 Number of new cases observed 6 4 6 0 a Number of gametocyte carriers detected with NASBA is significantly higher in the DP treated group compared to the AL treated group. NASBA analysis on 56 DP treated subjects and 54 AL treated subjects detected strikingly more gametocyte carriers at the start of the study compared to microscopy; i.e. 22 study subjects in the DP arm (39.3%) and 21 in the AL (38.9%) were harbouring gametocytes before. The pfs25 NASBA revealed that 34 cases (60.7%) were gametocyte positive in the DP group on day 3, of which 13 cases were newly identified compared to day 0. In contrast, a significantly lower number of patients (20; 37%) were gametocyte positive in the AL treatment group on day 3, of which six new cases. This trend was also observed on day 7: the DP arm had 33 (58.9%) gametocyte positive samples, whereas the AL treated group had a significantly lower number of gametocyte positive samples (11, 20.3%). 
However on day 14 (AL: 12 positive [22.2%], six new; DP: 17 [30.4%], five new) and day 28 (AL: 5 positive [9.3%], no new cases; DP: 8 [14.3%], no new cases) of follow-up no significant difference in gametocyte carriage was observed between both treatment groups. The AUC (day 0–28) of the two treatment groups was calculated to be 20.0 infectious persons/day for the DP treatment arm and 10.5 infectious persons/day for the AL treatment arm. Stratifying the data for age under and above 60 months showed no difference in either of the groups.\n\nBODY.DISCUSSION:\nSeveral studies have analysed the efficacy and tolerability of AL and DP and all show very good results [1,4,6,7,12,14,25]. In the present study, the two drugs showed to be similar with respect to effectiveness and tolerability compared to other studies. However, most of these studies have a follow up of 42 days which makes a direct comparison of the results difficult. No adverse events other than those related to malaria itself were observed in the current study, which is in line with other reports. In the present study, all children experienced haemoglobin convalescence without difference between the two treatment arms, in contrast to the study of Kamya et al, who found a greater increase in the DP treated patients [4]. The difference in follow-up time and the numbers of patients included in both studies may be responsible for this difference. Further studies with comparable study length should be done to give an answer to these discrepancies. The effects on gametocytaemia and possibly malaria transmission deserve further study. Whereas asexual parasites were cleared in three days after the initiation of the two treatment schedules, gametocytaemia appeared different when assessed by microscopy as well as with NASBA. Gametocytes were present in low numbers throughout follow-up in both study groups. Artemisinin derivatives have in general a negative effect on gametocyte development and survival and thus influence malaria transmission, at least in low transmission areas [15-17]. In this study, the actual infectiousness of the remaining gametocyte populations in both treatment arms was not assessed; the presence of gametocytes does not necessarily mean that they actually contribute to transmission. Several studies have shown that gametocytes persist in a large population of previously infected and treated children [5,19,26,27]. A large proportion of these carriers has a parasite load below microscopical detection limit, a load that can be detected with molecular assays like NASBA. Patients with submicroscopic parasite densities may still be infectious to mosquitoes and may contribute to transmission [14,19], as confirmed with membrane feeding experiments [18,28]. Studies that include reduction of transmission as a component of efficacy of drugs, thus need to incorporate highly sensitive molecular assays to reliably assess gametocyte densities. The present study showed a limited effect of DP on gametocyte development in comparison with AL when a sensitive tool like NASBA is used for gametocyte detection, which could limit the usefulness of DP to areas with low transmission but this finding should be further investigated in larger studies in different study sites with different transmission intensities. It is not clear if plasma concentrations of dihydroartemisinin in the blood could play a role. Dihyrodartemisinin is the major and the active metabolite of artemether. 
So far, no studies have been performed that compare the plasma levels of dihydroartemisinin when given as such or after administration of artemether and subsequent metabolisation. This should be further investigated together with effect on gametocytogenesis, which should incorporate a sensitive detection tool for gametocytes such as NASBA. The effect that drugs can have on gametocyte clearance as measured with NASBA could have some implications for the introduction drugs and especially the introduction of new drugs. This study showed that with sensitive detection tools a difference in parasite clearance can be observed but these results should be confirmed in larger studies and in other study areas with different malaria transmission intensities. Transmission intensity varies significantly in the different African countries and within a country high and low transmission areas can often be identified. Malaria endemic countries generally have a national malaria drug policy for the whole country. Although this is logical from a practical and logistical point of view, it may not be the best approach for effective malaria control. It could, therefore, be more effective if a country develops specific drug policies to suit regional instead of national requirements.\n\nBODY.COMPETING INTERESTS:\nThe authors declare not to have a conflict of interest. The organizations that provided financial support had no influence in the design, the actual field and laboratory work, analysis and writing.\n\nBODY.AUTHORS' CONTRIBUTIONS:\nPM was involved in the conception of the study, carried out the molecular analysis of the samples, carried out the statistical analysis of the results and drafted the manuscript. PS coordinated the study in the field and was responsible for the clinical examination and treatment of the participants. SvA collected clinical data and blood samples in the field and did the clinical data analysis. SO coordinated the study in the field an arranged the logistics in the field. IV carried out the NASBA assays and the subsequent analysis. HS was involved in the design of the study and contributed to the drafting of the manuscript. PK as involved in the design of the study, critically read and improved the manuscript. All authors read and approved the final manuscript\n\n**Question:** Compared to dihydroartemisinin-piperaquine (AP) what was the result of arthemeter-lumefantrine (AL) on mean Hb?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** no significant difference\n"
}
] |
no significant difference
|
268
|
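The malaria record that ends here describes a simplified trapezoid area-under-the-curve (AUC) analysis of gametocyte carriage over days 0, 3, 7, 14 and 28 as a surrogate for infectiousness. A minimal sketch of that kind of calculation is given below; it feeds in the NASBA positivity fractions reported in the article, but the exact quantity and scaling the authors used are not stated, so the printed areas are illustrative and are not expected to reproduce the reported 20.0 and 10.5 figures.

```python
# Illustrative trapezoid-rule AUC over the follow-up days used in the record
# above (0, 3, 7, 14, 28). Inputs are the NASBA gametocyte-positive fractions
# reported in the article; the exact quantity/scaling the authors used is not
# stated, so the outputs are a sketch of the method, not the trial's figures.
import numpy as np

days = np.array([0, 3, 7, 14, 28], dtype=float)

# Positives / analysed samples per arm, taken from the article's NASBA table.
dp_prevalence = np.array([22, 34, 33, 17, 8], dtype=float) / 56.0  # DP arm, n = 56
al_prevalence = np.array([21, 20, 11, 12, 5], dtype=float) / 54.0  # AL arm, n = 54

def trapezoid_auc(y, x):
    """Plain trapezoid rule: sum of interval widths times mean of the endpoints."""
    return float(np.sum(np.diff(x) * (y[:-1] + y[1:]) / 2.0))

print("DP AUC (prevalence-days):", round(trapezoid_auc(dp_prevalence, days), 2))
print("AL AUC (prevalence-days):", round(trapezoid_auc(al_prevalence, days), 2))
```

Whatever the units, the ratio of the two areas is what carries the comparison between arms, which is why a simplified trapezoid AUC is a reasonable surrogate when sampling days are sparse and uneven.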
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Randomized Clinical Trial of Laparoscopic Versus Open Repair of the Perforated Peptic Ulcer: The LAMA Trial\n\n ABSTRACT.BACKGROUND:\nLaparoscopic surgery has become popular during the last decade, mainly because it is associated with fewer postoperative complications than the conventional open approach. It remains unclear, however, if this benefit is observed after laparoscopic correction of perforated peptic ulcer (PPU). The goal of the present study was to evaluate whether laparoscopic closure of a PPU is as safe as conventional open correction.\n\nABSTRACT.METHODS:\nThe study was based on a randomized controlled trial in which nine medical centers from the Netherlands participated. A total of 109 patients with symptoms of PPU and evidence of air under the diaphragm were scheduled to receive a PPU repair. After exclusion of 8 patients during the operation, outcomes were analyzed for laparotomy (n = 49) and for the laparoscopic procedure (n = 52).\n\nABSTRACT.RESULTS:\nOperating time in the laparoscopy group was significantly longer than in the open group (75 min versus 50 min). Differences regarding postoperative dosage of opiates and the visual analog scale (VAS) for pain scoring system were in favor of the laparoscopic procedure. The VAS score on postoperative days 1, 3, and 7 was significant lower (P < 0.05) in the laparoscopic group. Complications were equally distributed. Hospital stay was also comparable: 6.5 days in the laparoscopic group versus 8.0 days in the open group (P = 0.235).\n\nABSTRACT.CONCLUSIONS:\nLaparoscopic repair of PPU is a safe procedure compared with open repair. The results considering postoperative pain favor the laparoscopic procedure.\n\nBODY.INTRODUCTION:\nThe incidence of perforated peptic ulcer (PPU) has declined over the past several years because of the introduction of anti-ulcer medication and Helicobacter eradication therapy [1, 2]. Nevertheless the incidence and mortality of PPU is 5–10%. The mortality will increase up to 50% if the perforation exists for more than 24 h [3, 4]. There are several options for treatment of PPU, but the preferred treatment is surgery by upper abdominal laparotomy [5, 6]. Mouret et al. published the first results of laparoscopic repair in 1990 [7]. He concluded that it was a good method that probably reduced postoperative wound problems and adhesions. After the success of laparoscopic cholecystectomy and other laparoscopic procedures, it was thought that patients would have less pain and a shorter hospital stay after laparoscopic correction of PPU [8, 9]. Various studies have shown that laparoscopic suturing of the perforation is feasible, but there is still no proof of real benefits of laparoscopic correction [3, 6, 10–12]. 
Therefore we performed a multicenter randomized trial comparing open correction of PPU with laparoscopic repair.\n\nBODY.METHODS.PARTICIPANTS:\nPatients with symptoms of the clinical diagnosis of PPU were included in nine medical centers in the Netherlands participating in the LAMA (LAparoscopische MAagperforatie) trial between March 1999 and July 2005. Eligible patients were informed of the two surgical approaches and were invited to participate in the study. Exclusion criteria were the inability to read the Dutch language patient information booklet, inability to complete informed consent, prior upper abdominal surgery, and current pregnancy. The ethics committees of all participating institutions approved the trial.\n\nBODY.METHODS.RANDOMIZATION:\nSurgeons contacted the study coordinator after the patients had provided informed consent and randomization took place by opening a sealed envelope. The envelope randomization was based on a computer-generated list provided by the trial statistician.\n\nBODY.METHODS.SURGICAL PROCEDURE:\nAll patients received intravenous antibiotics prior to operation and were allocated for Helicobacter pylori eradication therapy according to established guidelines [13]. The open surgical procedure was performed through an upper abdominal midline incision. Closure of PPU was to be achieved by sutures alone or in combination with an omental patch. After repair of the defect cultures were drawn from the peritoneal fluid, after which the peritoneal cavity was lavaged. During lavage it was permissible to insufflate the stomach to test for leakage of the closed defect. No method was specified for closing the abdomen. Laparoscopic repair was performed with the patient and the team set up in the \"French\" position. Trocars were placed at the umbilicus (video scope) and on the left and right midclavicular line above the level of the umbilicus (instruments). If necessary a fourth trocar was placed in the subxiphoid space for lavage or retraction of the liver. Surgeons were free to use either 0° or 30° video scopes for the procedure. The rest of the procedure was identical to that described above for open repair. No method was specified for closing the trocar incisions.\n\nBODY.METHODS.POSTOPERATIVE FOLLOW-UP:\nPostoperative pain was scored by means of a visual analog scale (VAS) for pain on days 1, 3, 7, and 28 ranging from 0 (no pain) to 10 (severe pain). In addition, the days during which opiates were used by the patients were registered. All complications, minor and major, were monitored. The treating surgeons determined time of discharge on the basis of physical well-being, tolerance of a normal diet, and ability to use the stairs. For this reason, this was an unblinded trial. Postoperative hospital stay without correction for time spent in hospital as a result of non-medical reasons (inadequate care at home) was calculated. Patients were invited to attend the outpatient clinic at 4 weeks, 6 months, and one year postoperatively. They were asked to complete forms related to pain and use of analgesics.\n\nBODY.METHODS.STATISTICAL ANALYSIS:\nData analysis was carried out according to the intention-to-treat principle as established in the trial protocol. Data were collected in a database, and statistical analyses were performed with the Statistical Package for Social Sciences for Windows (SPSS 15.0, SPSS Inc., Chicago, IL). A researcher blinded to the nature of the procedures performed all data analyses. The primary outcome of the trial was duration of hospital stay. 
The power analysis was performed on basis of a reduction in hospital stay by 1.5 days (10–8.5 days from admission) in favor of the laparoscopically treated group using a β of 0.80 and an α of 0.05. This resulted in a trial size of 50 patients per group. The Pearson chi-squared test was used to compare categorical variables, and the Mann-Whitney U-test was used to compare continuous variables as we could not assume normal distribution because of the relatively small numbers. In Tables 1–6 medians and interquartile ranges (IQR) are reported. All data were analyzed according to the intention-to-treat principle; i.e., patients remained in their assigned group even if during the procedure the surgeon judged the patient not to be suitable for the technique assigned or if conversion was required. Null hypotheses were tested two-sided and a P value of 0.05 or less was considered statistical significant.Table 1Baseline parametersLaparoscopic repairOpen repairP valuen = 52 n = 49 Male:female ratio1.3:11.9:1Median age (years) + IQR 66 (25.8)59 (29.5)0.185Median BMI (kg/m2) + IQR23 (4)22 (5)0.118Median duration of symptoms (h) + IQR11 (17)11 (19)0.948Median blood pressure systolic (mmHg) + IQR125 (38.5)130 (36.5)0.457Median blood pressure diastolic (mmHg) + IQR75 (25.5)75 (24.5)0.596Median heart rate (beats/min) + IQR 88 (34.0)92 (21)0.403Median body temperature (°C) + IQR 36.9 (0.92)36.8 (1.5)0.658Mannheim Peritonitis Index + IQR19.5 (8.25)16 (14)0.386Median white cell count (×109/l) + IQR 12.1 (8.9)12.1 (7.75)0.467Median ASA score + IQR 1.0 (1.0)1.5 (1.0)0.902IQR interquartile range, difference between 25th percentile and 75th percentile; BMI body mass indexASA American Society of Anesthesiologists Association score\n\nBODY.RESULTS.PATIENTS:\nA total of 109 patients were included in the trial based on a high suspicion of PPU (Fig. 1). Eight patients were excluded during operation because no gastric perforation was detected or a defect in other parts of the digestive tract was uncovered. Data for these patients were not collected and the patients were excluded from further analysis. The remaining 101 patients made up the study population; their baseline parameters are given in Table 1. Fifty-two patients were randomized for laparoscopic repair and 49 for open repair of the perforation. Forty patients were female. The mean age of the patients was 61 years. The BMI (body mass index) was equally distributed between the groups, with a median of 22.5. Patients in both groups had been suffering from symptoms for a mean duration of 11 h, and those in the laparoscopy group presented with a median Mannheim Peritonitis index [14] of 19.5, whereas those in the open group had a median Mannheim Peritonitis index of 16.Fig. 1Patient flow chart Thirty patients reported the use of non-steroidal anti-inflammatory drugs (NSAIDs; 17 laparoscopic versus 13 open), and 10 patients used proton pump inhibitors (6 laparoscopic versus 4 open). Patient history revealed gastric ulcer disease in 19 patients.\n\nBODY.RESULTS.INTRAOPERATIVE FINDINGS:\nThe discovered ulcer perforations were found to have a mean diameter of 10 mm, which did not differ between groups (Table 2). Location of the perforated ulcers was distributed equally between groups. Defects were located in the prepyloric region (n = 41), the postpyloric region (n = 34), and at the pylorus (n = 20). The median volume of lavage fluid used was 1,000 ml (range: 100–5,000 ml). The surgeon decided the amount of lavage used. 
There was no consensus on how much was necessary. Median blood loss did not differ between groups. Skin-to-skin time differed by 25 min, favoring open repair of PPU (Table 2).Table 2Intraoperative findingsLaparoscopic repairOpen repairP valuen = 52n = 49Median size of perforation (mm) + IQR10.0 (7.0)7.0 (6.0)0.379Number of patients with defect Pyloric812 Postpyloric2014 Prepyloric1922 Median volume of lavage (ml) + IQR1,000 (1,500)1,000 (1,425)1.000 Median bloodloss (ml) + IQR10.0 (40.0)10.0 (50.0)0.423 Skin to skin time (min) + IQR75 (47.2)50 (25.5)0.000\n\nBODY.RESULTS.INTRAOPERATIVE COMPLICATIONS:\nConversion to open surgery was required in four patients (8%). Reasons for conversion included the inability to visualize the ulcer defect because of bleeding (n = 1/52; 2%), inability to reach the defect because of perforation in the vicinity of the gastroduodenal ligament and because of a dorsal gastric ulcer (n = 2/52; 4%), and inability to find the perforation (n = 1/52; 2%).\n\nBODY.RESULTS.POSTOPERATIVE COMPLICATIONS:\nComplications were statistically equally distributed between the two groups (Table 3). There were 12 complications in 9 patients in the laparoscopic group and 24 complications in 15 patients in the open group. Mortality was 4% in the laparoscopic group and 8% in the open group. In the laparoscopic group death was caused by sepsis due to leakage at the repair site. In the open group 3 patients died because of pulmonary problems (ARDS, pneumonia), and 1 patient died after complications following a cerebrovascular accident (CVA) combined with respiratory insufficiency.Table 3Postoperative complicationsLaparoscopic repairOpen repairP valuen = 52n = 49Pneumonia21Respiratory insufficiency13ARDS1Cardiac problems22Sepsis31Leakage at repair site2Abscess3Ileus1Fascial dehiscence1Wound infection3Urinary tract infection2Incisional hernia1Cerebrovascular accident1Death24Total complications12240.061Total of patients with complications ≥19 (18%)15 (36%)\n\nBODY.RESULTS.DISCHARGE:\nTime to discharge was similar for the two groups, with a median difference of 1.5 days (Table 4). Nasogastric decompression could be stopped after 2–3 days in both groups (Table 4).Table 4Duration of hospital stay, nasogastric decompressionLaparoscopic repairOpen repairP valuen = 52n = 49Median hospital stay (days) + IQR6.5 (9.3)8.0 (7.3)0.235Median duration of nasogastric decompression (days) + IQR2.0 (3.0)3.0 (1.3)0.334\n\nBODY.RESULTS.PAIN:\nVisual analog pain scores were in favor of laparoscopic repair (Table 5; p < 0.005). Although the median duration of opiate use in the two groups was 1.0, the mean duration in the open group was found to be 0.6 days longer than in the laparoscopic group (Table 6).Table 5Postoperative painMedian VAS pain scoreMedian VAS pain scoreP valueLaparoscopic repairOpen repairDay 1 + IQR3.8 (3.0)5.15 (2.5)0.001Day 3 + IQR2.1 (2.5)3.0 (2.4)0.035Day 7 + IQR1.0 (2.0)1.85 (2.8)0.036Day 28 + IQR0.3 (0.7)0.0 (1.7)0.748Table 6Postoperative opiate usageOpiate requirementOpiate requirementP valueLaparoscopic repairOpen repairMedian duration (days) + IQR1.0 (1.25)1.0 (1.0)0.007Mean duration (days) ± SD1.0 ± 0.91.6 ± 0.90.007\n\nBODY.RESULTS.VAS APPEARANCE OF SCAR:\nThe VAS score for appearance of the scar left by the respective procedures (subjectively recorded in the same way as pain) differed by 2.3 points, favoring the laparoscopic procedure (7.7 vs. 
5.4; P = 0.033)\n\nBODY.DISCUSSION:\nThe need for surgery for PPU has declined enormously in Europe and America with reported rates ranging from 50% to 80%, thanks to effective gastric acid-reducing medication [15]. For this reason, as well as because many surgeons prefer upper laparotomy, it took more time than expected to include 100 patients in our study. Reasons given by surgeons who prefer open repair were that it is a more familiar procedure and it can be completed faster than laparoscopy. It was also noted that patients often undergo operation at night, when the surgeon on call was not always laparoscopically trained. Other randomized trials have already shown the feasibility of laparoscopic repair of PPU [3, 4, 6, 10]. Only a few had more than 100 patients, and some emphasized results from subgroups of patients [8, 11, 12]. We did not subdivide our patients and included patients with risk factors for developing sepsis or conversion [10]. In eight of the original 109 patients (7%) it became evident during the operation that the patient had a diagnosis different from PPU. In the patients who were randomized for laparoscopy this discovery revealed the benefit of laparoscopy as a diagnostic procedure indicating either an upper or lower abdominoplasty or continuation of the laparoscopy [16]. Conversion rate in the laparoscopy group was 8% (4/52). This is much lower than that reported in literature, where conversion rates as high as 60% were found [3, 4, 6]. This maybe partially explained by the fact that only trained and experienced laparoscopic surgeons (those performing at least 50 laparoscopic procedures a year) participated in this trial, confirming the belief that this procedure should only be done by experienced surgeons [3–5]. Operating time was significantly longer in the laparoscopy group (75 min versus 50 min), which is comparable to reports in the literature [3, 10]. A possible explanation for the longer operative time is that laparoscopic suturing is more demanding [9, 17], especially if the edges of the perforation are infiltrated and friable. Sutures easily tear out and it is more difficult to take large bites and to tie knots properly. Use of a single-stitch method described by Siu et al. [18], fibrin glue, or a patch might solve this problem [12, 19]. Another reason for the increase in operating time is the irrigation procedure. Irrigating through a 5-mm or even a 10-mm trocar is time consuming, and suction of fluid decreases the volume of gas and reduces the pneumoperitoneum. There is no evidence that irrigation lowers the risk of sepsis [20], so it might only be necessary if there are food remnants in the abdomen; perhaps there is no need for it at all. One of the suspected risks of laparoscopic surgery is that of inducing sepsis by increasing bacterial translocation while establishing a pneumoperitoneum [6]. This danger could not be confirmed in our trial. Furthermore data suggest that there is a decrease in septic abdominal complications when laparosopic surgery is used [8]. Evidence already exists that laparoscopic correction of PPU causes less postoperative pain [6, 12, 17, 18]. The meta-analysis published by Lau [6] showed that eight out of ten studies showed a significant reduction in dosage of analgesics required in the laparoscopic group. Also, the three studies that had included VAS pain scores showed consistently lower pain scores, as was observed in our study as well. 
Whether this will lead to a better quality of life for patients, especially during the first weeks after surgery still needs to be analyzed. Although patients in our series who underwent laparoscopy had less postoperative pain, there was no difference in the length of hospital stay in our two groups. In fact, hospital stay overall in our patients was very long. This was most likely caused by the fact that many patients, especially the elderly, could not be discharged because of organizational reasons. Of the 101 patients, 41% were 70 years or older (24 in the laparoscopic group versus 17 in the open group). It appears that the age of PPU patients is increasing, and this will eventually represent a significant problem in the future [2, 3]. One benefit of the laparoscopic procedure not often mentioned in literature [6] is cosmetic outcome. Nowadays patients are aware of this benefit, and sometimes this is the reason why they demand laparoscopic surgery. In conclusion, the results of the LAMA trial confirm the results of other trials that laparoscopic correction of PPU is safe, feasible for the experienced laparoscopic surgeon, and causes less postoperative pain. Operating time was longer in the laparoscopic group and there was no difference in length of hospital stay or incidence of postoperative complications.\n\n**Question:** Compared to Conventional open approach what was the result of Laparoscopic surgery on Need for opiates?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly decreased\n"
}
] |
significantly decrease
|
241
|
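The LAMA record that ends here sizes the trial from a power calculation: detecting a 1.5-day reduction in hospital stay with a power of 0.80 and a two-sided alpha of 0.05 gave 50 patients per group. The sketch below shows how a figure in that range can be obtained with statsmodels; the standard deviation of hospital stay is not reported in the article, so the 2.65-day value is an assumption chosen only so the example lands near the reported 50 per group.

```python
# Illustrative sample-size sketch for the power analysis described in the LAMA
# record above: detect a 1.5-day difference in hospital stay, power 0.80,
# two-sided alpha 0.05. The SD is not reported in the article; 2.65 days is an
# assumed value used purely for illustration.
from statsmodels.stats.power import TTestIndPower

difference_days = 1.5    # reduction the trial was powered to detect
assumed_sd_days = 2.65   # assumption (not stated in the article)
effect_size = difference_days / assumed_sd_days  # Cohen's d

n_per_group = TTestIndPower().solve_power(
    effect_size=effect_size,
    alpha=0.05,
    power=0.80,
    ratio=1.0,
    alternative="two-sided",
)
print(f"Patients needed per group: {n_per_group:.1f}")
```

The smaller the assumed spread in hospital stay, the larger the standardized effect and the fewer patients are needed, which is why the unreported SD is the key hidden input in this kind of calculation.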
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: A prospective, contralateral comparison of photorefractive keratectomy (PRK) versus thin-flap LASIK: assessment of visual function\n\n ABSTRACT.PURPOSE:\nTo compare differences in visual acuity, contrast sensitivity, complications, and higher-order ocular aberrations (HOAs) in eyes with stable myopia undergoing either photo-refractive keratectomy (PRK) or thin-flap laser in situ keratomileusis (LASIK) (intended flap thickness of 90 μm) using the VISX Star S4 CustomVue excimer laser and the IntraLase FS60 femtosecond laser at 1, 3, and 6 months postoperatively.\n\nABSTRACT.METHODS:\nIn this prospective, masked, and randomized pilot study, refractive surgery was performed contralaterally on 52 eyes: 26 with PRK and 26 with thin-flap LASIK. Primary outcome measures were uncorrected distance visual acuity (UDVA), corrected distance visual acuity (CDVA), contrast sensitivity, and complications.\n\nABSTRACT.RESULTS:\nAt 6 months, mean values for UDVA (logMAR) were −0.043 ± 0.668 and −0.061 ± 0.099 in the PRK and thin-flap LASIK groups, respectively (n = 25, P = 0.466). UDVA of 20/20 or better was achieved in 96% of eyes undergoing PRK and 92% of eyes undergoing thin-flap LASIK, whereas 20/15 vision or better was achieved in 73% of eyes undergoing PRK and 72% of eyes undergoing thin-flap LASIK (P > 0.600). Significant differences were not found between treatment groups in contrast sensitivity (P ≥ 0.156) or CDVA (P = 0.800) at postoperative 6 months. Types of complications differed between groups, notably 35% of eyes in the thin-flap LASIK group experiencing complications, including microstriae and 2 flap tears.\n\nABSTRACT.CONCLUSION:\nUnder well-controlled surgical conditions, PRK and thin-flap LASIK refractive surgeries achieve similar results in visual acuity, contrast sensitivity, and induction of HOAs, with differences in experienced complications.\n\nBODY.INTRODUCTION:\nRefractive surgery is one of the most commonly performed elective procedures and will likely maintain its popularity as ablation techniques become more refined and understanding of corneal wound healing improves. Two of the most common methods of refractive surgery are photorefractive keratectomy (PRK) and laser in situ keratomileusis (LASIK). 
The rapid improvement in vision and lack of postoperative pain associated with LASIK has made this the preferred option with patients compared with PRK, which has greater postoperative discomfort and prolonged recovery of visual acuity.1 Recently, there has been renewed interest in PRK because of increasing concerns of complications associated with LASIK flap creation, including dry eye, corneal ectasia, and flap tears.2–5 Thin-flap LASIK attempts to gain benefits of both techniques by creating a flap of between 80 and 90 μm.6–8 Use of a thinner flap results in a more biomechanically stable cornea and decreases incidence of ectasia given the thicker residual stroma.3,9 Cutting a thinner LASIK flap is less invasive to the nerves within the corneal stroma, decreasing the severity and duration of dry eye, possibly by preserving corneal sensation and blinking rate.10–14 Flap creation avoids corneal epithelium removal, allowing reduced healing time and less haze and scarring.15 The present contralateral study compares the outcomes of eyes that have undergone PRK or thin-flap LASIK using the VISX STAR S4 excimer laser (VISX Incorporated, Santa Clara, CA), with flaps created with intended thicknesses of 90 μm using the IntraLase FS60 femtosecond laser (Abbott Medical Optics [AMO], Santa Ana, CA).\n\nBODY.METHODS:\nData from myopic eyes were analyzed, with or without astigmatism, in which the dominant eye was randomized (Research Randomizer software – Urbaniak, www.randomizer.org) to PRK or thin-flap LASIK (90 μm flap) and the nondominant eye underwent the alternative treatment. All PRK and thin-flap LASIK treatments were performed using the VISX Star S4 CustomVue laser at the John A. Moran Eye Center, Salt Lake City, Utah, between February 2008 and July 2009. All surgeries were overseen by two surgeons (M.M., M.D.M.). The research protocol was approved by the University of Utah Hospital Institutional Review Board. All patients included in this study met the US Food and Drug Administration guidelines for VISX CustomVue LASIK. Mean age of patient, 13 men and 13 women, was 30.8 years (range: 23–46). Twenty-six patients (52 eyes) with stable myopia (1.5–8.5 diopters [D]) and astigmatism (0.242–3.11 D) were enrolled in the study. Eleven patients excluded from this study had clinically significant lens opacities, previous corneal or intraocular surgery, keratoconus, unstable refraction, autoimmune disease, immunosuppressive therapy, or were pregnant or breastfeeding. Correction was made for distance and patients desiring monovision correction were excluded. Contact lenses were discontinued 2 weeks prior to screening for soft contact lens wearers and 6 weeks prior to screening for rigid gas permeable lens wearers. All patients had a preoperative examination including assessment of uncorrected distance visual acuity (UDVA), corrected distance visual acuity (CDVA), tonometry, slitlamp examination of the anterior segment, and dilated fundus examination. Manifest and cycloplegic refractions were repeated on 2 separate visits to ensure reliability and stability. Corneal topography and thickness were measured using the Orbscan II v.3.0 (Bausch and Lomb, Rochester, NY). All eyes received 5 preoperative wavefront analyses with the VISX CustomVue WaveScan aberrometer v.3.62 (Fourier) (AMO), without pharmacologic intervention, under mesopic conditions, with a minimum pupil diameter of 6.0 mm. 
The contralateral study design was made so that each eye could act as a control for the fellow eye in each patient, allowing for study groups to be well matched. There were no violations in the randomization; all patients were analyzed as originally assigned. The randomization protocol was generated before the trial and known only to the study coordinator. In all patients, the emmetropic correction target was based on manifest refraction and wavefront analysis. All flaps were created with the IntraLase FS60 femtosecond laser at 60 kHz in a raster pattern with bed energy of 1.15 μJ, side-cut energy of 2.00 μJ, and pocket enabled. The flaps were created with an intended thickness of 90 μm, diameter of 8.4 to 9.0 mm, superior hinge angle of 55°, and a side-cut angle of 70°. Intraoperative pachymetry or optical coherence tomography were not performed to evaluate actual flap thicknesses. If the 8.0 mm maximum intended ablation diameter exceeded the flap diameter, the hinge and flap were shielded during ablation. Postoperatively, each eye undergoing thin-flap LASIK received 1 drop of gatifloxacin 0.3% (Zymar; Allergan Inc, Irvine, CA), prednisolone acetate 1% (Pred Forte, Allergan Inc), ketorolac tromethamine 0.4% (Acular LS, Allergan Inc.), and a bandage soft contact lens (Softlens Plano T, Bausch and Lomb, Rochester, NY). The prednisolone acetate was continued hourly during the first preoperative day and 4 times daily for an additional 6 days. The gatifloxacin was continued 4 times daily for 1 week. In eyes undergoing PRK all eyes had their corneas cooled with 15 mL of BSS (2.8–3.9°C) immediately following ablation. This was followed by 1 drop of a gatifloxacin 0.3% (Zymar), prednisolone acetate 1% (Pred Forte), ketorolac tromethamine 0.4% (Acular LS) and a bandage soft contact lens (Softlens Plano T). Ketorolac tromethamine was continued 4 times a day for 3 days and then discontinued. Gatifloxacin and prednisolone acetate were continued 4 times a day for 1 week with a subsequent steroid taper over 2 to 3 months per surgeon preference. Mitomycin C was not administered to any patient in the study at any time. Both bandage soft contact lenses were removed simultaneously once re-epithelialization was complete, typically on postoperative days 3 to 5. Patients were seen 1 day, 1 week, 1 month ± 10 days, 3 months ±14 days, and 6 months ±14 days. At all follow-up examinations, UDVA and CDVA were tested using a standard Snellen eye chart. Visual acuity was recorded in both Snellen notation and logarithm of the minimum angle of resolution (logMAR) format. Contrast sensitivity was measured in controlled mesopic conditions at 3, 6, 12, and 18 cycles per degree (cpd) using the Vectorvision CSV-1000E chart (Vectorvision, Greenville, OH). Higher-order aberrations (HOAs), including coma Z(3,1), trefoil Z(3,3), and spherical aberration Z(4,0), were measured using the CustomVue WaveScan at a mean diameter of 6 mm. Undilated scans of both eyes were taken preoperatively and 1, 3, and 6 months postoperatively. Primary outcome measures were UDVA, CDVA, contrast sensitivity, and complications. HOAs were measured and trended within groups as secondary measures. After the study was completed, the results were compiled and the data unmasked for statistical analysis. Refractive error, visual acuity, and HOAs were treated as continuous variables and analyzed for significance by independent t-tests. In all tests, P values <0.05 were considered statistically significant. 
Data analysis was done using Microsoft Excel (Microsoft Corp, Redmond, WA).\n\nBODY.RESULTS:\nMean preoperative measurements of UDVA, CDVA, sphere, and cylinder are shown in Table 1. 25 of 26 patients (50 eyes) completed the study at postoperative 6 months. One eye in the thin-flap LASIK group required PRK retreatment following a flap tear and both eyes from this patient were therefore removed from analysis of visual acuity, contrast sensitivity, and HOAs as the retreatment prevented the ability to distinguish results between the 2 surgical methods. The eyes from this patient were still included in the analysis of complications.\n\nBODY.RESULTS.VISUAL ACUITY:\nTable 2 shows visual acuity outcomes at 1, 3, and 6 months postoperatively. Statistically significant differences were found between PRK and thin-flap LASIK in UDVA at 1 month postoperatively, with thin-flap LASIK eyes showing more improvement in UDVA. Visual acuities were not statistically different between the groups at 3 or 6 months.\n\nBODY.RESULTS.STABILITY, EFFICACY, AND PREDICTABILITY:\nTable 3 shows stability, efficacy, and predictability outcomes postoperatively at 1, 3, and 6 months. CDVA was statistically different between groups at 1 month, with 24% of the PRK group losing a line or more from preoperative values, while 9% of eyes in the thin-flap LASIK group lost only 1 line at 1 month. No eyes in the thin-flap LASIK group lost more than 1 line. Also, 39% of eyes in the thin-flap group gained a line by 1 month compared with only 12% of eyes in the PRK group. At 6 months 64% and 56% of eyes had gained a line or more of CDVA in the PRK and thin-flap LASIK groups, respectively (P = 0.462).\n\nBODY.RESULTS.CONTRAST SENSITIVITY:\nContrast sensitivity measurements at 3, 6, 12, and 18 cycles per degree (cpd) in each group are shown in Figure 1. There were no differences between groups at any cpd at any time in the study (P ≥ 0.156). The thin-flap LASIK group showed no change in contrast sensitivity postoperatively (P > 0.131), while patients in the PRK group had a slight decrease in contrast sensitivity at 1 month seen at 3 and 12 cpd (P = 0.004) and (P = 0.025), respectively. At 6 months contrast sensitivity in the PRK group was still significantly decreased from baseline at 3 cpd (P = 0.013), although it did not reach a statistically significant difference at 3 months (P = 0.101).\n\nBODY.RESULTS.COMPLICATIONS:\nTypes of complications differed between the 2 groups. In the PRK group, 2 cases of epithelial defects occurred by 1 week, but had completely resolved by 6 months. Three eyes in the PRK group had mild haze appearing as early as 1 week postoperatively. Haze remained in only 1 eye at 6 months, but was classified as minimal and had no effect on UDVA or CDVA. Nine eyes (35%) in the thin-flap LASIK group experienced complications. In the thin-flap LASIK group, flap debris (1 eye), diffuse lamellar keratitis (DLK, 1 eye), and an epithelial cyst at the edge of 1 flap were observed, with no loss of UDVA or CDVA, and all resolved by 6 months. Microstriae were observed in 6 eyes, one of which was the eye described above with flap debris and the other was the eye with DLK, with no associated loss of UDVA or CDVA, with epithelial proliferation noted as filling the microstria and making them less apparent. 
Two eyes in the thin-flap LASIK group experienced flap tears intraoperatively – one resulting in mild flap edge scarring by 6 months that had no significant effect on visual function, and the other case affecting vision at 1 month postoperatively which was retreated with PRK at 3 months. As a result of the retreatment with the counter surgical technique, the ability to accurately compare visual acuity, contrast sensitivity, and HOAs between the 2 surgical methods was limited and both eyes from this patient were removed from analysis of these measures, but were still included in the analysis of complications.\n\nBODY.RESULTS.HIGHER-ORDER ABERRATIONS:\nAt postoperative 1, 3, and 6 months, 24 (96%), 25 (100%), and 24 (96%) eyes, respectively, in each group completed CustomVue WaveScan analysis. Total root-mean square (RMS) HOAs, coma, trefoil, and spherical aberrations are compared in Figure 2. There were no significant differences between groups in any HOAs throughout the study (P ≥ 0.101), with all P values at 6 months ≥0.63. In both groups, total HOAs (P < 0.008), spherical (P < 0.002), and coma (P = 0.008 at 3 months; P = 0.024 at 6 months) aberrations were significantly increased compared with preoperative conditions. Trefoil showed no significant change throughout the study in either group (P = 0.298).\n\nBODY.DISCUSSION/CONCLUSION:\nThe present study confirms that PRK and thin-flap LASIK are effective surgeries for the correction of low to moderate myopia. Although thin-flap LASIK showed superior visual results in the early postoperative period there was no statistically significant difference in outcomes of UDVA, CDVA, contrast sensitivity, or total RMS HOAs between PRK and thin-flap LASIK by 6 months. In a similar study comparing PRK and thin-flap LASIK, Slade et al also found that UDVA results were better in the thin-flap group early on and equalized by 6 months.16 Our study showed a similar trend, with no significant differences in any of the primary outcomes at 6 months, and with no difference in UDVA at 3 months. Visual regression in our study was similar to outcomes in Slade's study in which 42% of the PRK group lost a line or more of CDVA and 22% of the thin-flap LASIK group lost 1 line at 1 month postoperatively. Despite the use of custom ablation, postoperative increases in total HOAs, sphere, and coma were noted in our study, as also seen by Slade et al, although they noted that the increase in sphere and coma aberrations was significantly higher in the PRK group at 1 and 3 months postoperatively. As found in previous studies, there was no significant change found in trefoil at any time postoperatively.17,18 Our study showed no difference in induction of HOAs between groups at any time. Although increases in HOAs after refractive surgery have been correlated with decreases in contrast sensitivity in other studies, we demonstrate that increases in total RMS, sphere, and coma were seen postoperatively in both groups without a reliable decrease in contrast sensitivity.19,20 Slade's group found that contrast sensitivity was better in the thin-flap group at all postoperative points in the study, which may have been related to their finding of lower induction of sphere and coma aberrations in the thin-flap group compared with the PRK group. The authors recognize that the Slade study had a larger population size (n = 50 per group) and would have increased power to detect significant differences. 
Our study would have had increased power of analysis with a similar study group size, but results from analysis of HOAs would not likely change as P values for all HOAs at 6 months were ≥0.63. It would be difficult to make any such correlation between contrast sensitivity and HOAs from the results of this study. A loss of CDVA has been associated with the development of corneal haze in other studies, but as mentioned above none of the patients with visual regression developed haze.21–23 Findings in other studies showing that the biomechanics of eyes that have received thin-flap LASIK treatment are indistinguishable from those of PRK have led to suggestions that thin-flap LASIK is the best approach to LASIK.16 Although the present study did not find any statistically significant differences between thin-flap LASIK and PRK in terms of visual quality at 6 months, complications dealing with flap integrity in the thin-flap LASIK group were present which are not complications found in PRK. Although PRK remains a viable option for those unable to undergo LASIK, the use of thinner flaps may eliminate some of the complications seen with traditional LASIK. Larger studies are needed to better compare the complication rates of both methods and to determine how effective thin-flap LASIK will be in achieving the benefits of PRK and LASIK while avoiding the risks associated with each method. While thinner LASIK flaps attempt to preserve the biomechanical stability of the corneal stroma, at the same time, the flap itself becomes less stable, as was noted with the 2 flap tears and other complications occurring in the thin-flap LASIK group in this study. A study by Espandar and Meyer24 showed that most complications in flaps created by IntraLase femtosecond laser occurred at the hinge, which is where the 2 flap tears that occurred in this study. A thinner flap hinge would be biomechanically less stable and would increase the likelihood of intraoperative flap tear occurrence as well. Six of the 9 eyes with complications in the thin-flap LASIK group had microstriae, which are caused by the flattening of a weak corneal flap unable to maintain its curvature over the small area of stroma removed during ablation. The biomechanics of the flap and hinge, however, cannot be evaluated by the design of this study as analysis was done based on intended flap thickness, which has been shown to vary with the IntraLase FS60 femtosecond laser.25 The study could have been strengthened had intraoperative pachymetry or OCT been performed. Creating a flap with increased integrity would help prevent microstriae from forming and would also provide for a stronger hinge that would be less susceptible to flap tear. Possible ways to optimize flap integrity include modification of hinge and side-cut angle creation, as well as improved planarity and microdisruption of flap edges. This will allow improved adhesion of the flap to the underlying stroma. Continued improvements in laser technology may allow for safer creation of thinner flaps, helping to provide evidence for superior outcomes in thin-flap LASIK, permitting the biomechanical stability of PRK with the visual recovery of LASIK. 
Custom flap formation that minimizes weak areas susceptible to tearing will be helpful in achieving this difficult balance between corneal and flap integrity.\n\n**Question:** Compared to Thin-flap laser in situ keratomileusis (LASIK) what was the result of Photo-refractive keratectomy (PRK) on Higher-order aberrations during the study?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** no significant difference\n"
}
] |
no significant difference
|
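The higher-order aberration outcome in the row above is reported only as between-group P values (all ≥ 0.101), and the excerpt does not name the statistical test used. As a rough sketch of the kind of two-sample comparison that produces a "no significant difference" label, the snippet below runs a Welch t-test on made-up 6-month total RMS HOA values; the test choice, group sizes and numbers are illustrative assumptions, not data or methods from the study.

```python
# Minimal sketch (not from the study): a two-sample comparison of total RMS
# higher-order aberrations between two surgical groups. The excerpt reports
# only P values and does not name the test used, so the test choice
# (Welch's t-test) and all values below are illustrative assumptions.
from scipy import stats

# Hypothetical 6-month total RMS HOA values (micrometres), ~24 eyes per group
prk_rms = [0.42, 0.38, 0.51, 0.47, 0.40, 0.45, 0.39, 0.44, 0.50, 0.41,
           0.43, 0.48, 0.37, 0.46, 0.42, 0.49, 0.40, 0.44, 0.38, 0.45,
           0.41, 0.47, 0.43, 0.39]
lasik_rms = [0.44, 0.40, 0.49, 0.45, 0.42, 0.46, 0.38, 0.43, 0.52, 0.40,
             0.44, 0.47, 0.39, 0.45, 0.41, 0.48, 0.42, 0.43, 0.37, 0.46,
             0.40, 0.48, 0.44, 0.41]

t_stat, p_value = stats.ttest_ind(prk_rms, lasik_rms, equal_var=False)
print(f"Welch t = {t_stat:.3f}, p = {p_value:.3f}")
# A p-value above the 0.05 threshold would be reported as
# "no significant difference" between groups, as in the label for this row.
```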
216
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: A heart failure self-management program for patients of all literacy levels: A randomized, controlled trial [ISRCTN11535170]\n\n ABSTRACT.BACKGROUND:\nSelf-management programs for patients with heart failure can reduce hospitalizations and mortality. However, no programs have analyzed their usefulness for patients with low literacy. We compared the efficacy of a heart failure self-management program designed for patients with low literacy versus usual care.\n\nABSTRACT.METHODS:\nWe performed a 12-month randomized controlled trial. From November 2001 to April 2003, we enrolled participants aged 30–80, who had heart failure and took furosemide. Intervention patients received education on self-care emphasizing daily weight measurement, diuretic dose self-adjustment, and symptom recognition and response. Picture-based educational materials, a digital scale, and scheduled telephone follow-up were provided to reinforce adherence. Control patients received a generic heart failure brochure and usual care. Primary outcomes were combined hospitalization or death, and heart failure-related quality of life.\n\nABSTRACT.RESULTS:\n123 patients (64 control, 59 intervention) participated; 41% had inadequate literacy. Patients in the intervention group had a lower rate of hospitalization or death (crude incidence rate ratio (IRR) = 0.69; CI 0.4, 1.2; adjusted IRR = 0.53; CI 0.32, 0.89). This difference was larger for patients with low literacy (IRR = 0.39; CI 0.16, 0.91) than for higher literacy (IRR = 0.56; CI 0.3, 1.04), but the interaction was not statistically significant. At 12 months, more patients in the intervention group reported monitoring weights daily (79% vs. 29%, p < 0.0001). After adjusting for baseline demographic and treatment differences, we found no difference in heart failure-related quality of life at 12 months (difference = -2; CI -5, +9).\n\nABSTRACT.CONCLUSION:\nA primary care-based heart failure self-management program designed for patients with low literacy reduces the risk of hospitalizations or death.\n\nBODY.BACKGROUND:\nLimited literacy skills are common among adults in the United States [1]. Low literacy is associated with increased risk of hospitalization and worse control of chronic diseases [1-4]. Heart failure is a common chronic illness requiring multiple medications and significant self-care. Heart failure is the leading cause of hospitalization in the Medicare population [5]. The complexity of care for heart failure puts people with low literacy at considerable risk for adverse outcomes including hospitalization, worse quality of life, and mortality. Heart failure disease-management interventions appear effective in reducing rehospitalizations and improving quality of life [6]. 
Most randomized clinical trials of heart failure disease management completed over the last 10 years have enrolled patients during, or shortly after, hospitalization and reported the outcome of readmission [6]. Although the designs of these programs vary, several have tested education and support to enhance patient self-management as the main component of the intervention [7-10]. The content of self-management education usually includes teaching to understand medications, reduce salt intake, monitor daily weights, and recognize symptoms. Most programs include structured follow-up either by home visit, phone, or mail. Only a few, uncontrolled studies specifically ask patients to self-adjust their diuretics [11,12]. Heart failure self-management programs may be particularly effective for vulnerable populations, such as those with poor literacy [13,14]. However, to our knowledge, no previous studies have explicitly examined the role of self-management programs in a low literacy population. A recently published study and accompanying editorial suggested that such self-management support may be most effective among vulnerable populations [13,14]. Low literacy may represent a vulnerability for which we should design our programs. Disease management for patients with low literacy may require refined approaches to foster self-management skills. We developed a heart failure self-management program for use by patients with a variety of literacy levels [15]. We performed a randomized controlled trial comparing our self-management program to usual care among outpatients to test if the program could reduce hospitalizations and improve heart failure-related quality of life.\n\nBODY.METHODS.STUDY DESIGN:\nWe conducted a randomized controlled trial in the University of North Carolina (UNC) General Internal Medicine Practice, which serves a wide socioeconomic range of patients. The practice, staffed by over 20 attending faculty and 70 medical residents, cares for over 500 patients with heart failure.\n\nBODY.METHODS.STUDY PARTICIPANTS:\nTo be eligible, patients had to have a clinical diagnosis of heart failure confirmed by their primary provider through a direct interview, and one of the following: 1) chest x-ray findings consistent with heart failure, 2) ejection fraction <40% by any method, or 3) a history of peripheral edema. They also had to have New York Heart Association class II-IV symptoms within the last 3 months. Patients were excluded if they had moderate to severe dementia (based on the treating physician's clinical judgment), terminal illness with life expectancy less than 6 months, severe hearing impairment, blindness, current substance abuse, a serum creatinine >4 mg/dl or on dialysis, a requirement of supplemental oxygen at home, lacked a telephone, or were scheduled to undergo cardiac surgery or awaiting heart transplant. We did not exclude patients on the basis of literacy skill because we felt patients of all literacy levels would benefit from this intervention. Patients who read well often prefer information presented in an easy-to-read format [16]. We accepted referrals from the cardiology clinic if patients met eligibility criteria. This study was approved by the Institutional Review Board of the UNC School of Medicine, and all participants gave informed consent prior to enrollment. For participants who could not adequately read the informed consent document, the research assistant read and explained it to them. 
They were asked to sign a short form indicating that the informed consent document was reviewed and they agreed to participate. When the short form was used, a witness was asked to attest to the adequacy of the consent process.\n\nBODY.METHODS.PROCEDURES:\nParticipants were recruited between November 2001 and April 2003 from the General Internal Medicine and Cardiology Practices at UNC Hospitals. A trained research assistant screened all patients age 30–80 for use of furosemide. If the patient was on furosemide, their physician was queried about the presence of heart failure. If the patient met eligibility criteria and consented to participate, baseline data were collected. We then randomized patients by concealed allocation based on a random number generator to receive the intervention or usual care. All patients were followed for one year. All data were collected in the General Internal Medicine Practice.\n\nBODY.METHODS.INTERVENTION:\nThe intervention was delivered in the General Internal Medicine Practice. The educational materials and disease management intervention were previously described in detail, and the intervention is summarized here [15]. The intervention began with a 1-hour educational session with a clinical pharmacist or health educator during a regular clinic visit. Patients were given an educational booklet designed for low literacy patients (written below the 6th grade level and extensively pre-tested in focus groups and a pilot study [15]) and a digital scale. The educator and patient reviewed the booklet together, including management scenarios. As part of the educational session, patients were taught to identify signs of heart failure exacerbation, perform daily weight assessment, and adjust their diuretic dose. Because this intervention was aimed at patients with low literacy, the health educator used pedagogic strategies felt to improve comprehension for patients with low literacy [17]. For example, the educator had the patient teach back the information [18], engaged the patient in filling out the notebook, and used brainstorming to help the patient incorporate self-management into their lives. The educator, patient, and primary care physician collaborated to establish the patient's \"good weight\" (i.e., where the patient's heart failure was stable) and baseline diuretic dose. The educator then filled in the management plan in the patient's notebook to help the patient better manage weight fluctuations and self-adjust the diuretic dose based on weight (Figure 1). The general plan involved doubling the dosage if weight went up and halving it if weight went down. The program coordinator then made scheduled follow-up phone calls (days 3, 7, 14, 21, 28, 56) and monthly during months 3–6. The follow-up phone calls, each lasting 5–15 minutes, were designed to reinforce the educational session and provide motivation for the patients. Again, the program coordinator had the patient describe their self-management practices and offered feedback to improve them. Patients experiencing worsening symptoms were scheduled acute visits with their physician. We did not provide specialized nursing assessment, care or medication advice beyond diuretic dosing. If the patient's doctor determined that the good weight had changed, the program coordinator would revise the care plan with the patient. 
Patients enrolled in the control group received a general heart failure education pamphlet written at approximately the 7th grade level, and continued with usual care from their primary physician. The only contacts between the research team and the control patients were at enrollment and data collection.\n\nBODY.METHODS.MEASURES:\nWe assessed outcomes at 6 and 12 months through in-person interviews and review of the medical record. To be sensitive to low literacy, all interviews were conducted verbally by a trained research assistant. If patients were unable to come to clinic for the interview, it was conducted by phone. The research assistant was not blinded to the patient's study group. Primary outcomes were death or all-cause readmission and heart failure-related quality of life at the end of 12 months. Data on hospitalization dates were obtained from the patient and confirmed by medical chart review. All admissions, regardless of hospital or cause, were counted. For exploratory analyses, we classified reason for admission as cardiac or non-cardiac. Cardiac admissions included those primarily for heart failure (e.g., shortness of breath and edema relieved by diuresis) and other cardiac causes such as chest pain, arrhythmias, or syncope. Cause of admission was determined by chart review by one of the authors (D.D.) who was blinded to treatment allocation. Heart failure-related quality of life was assessed using a modified version of the Minnesota Living with Heart Failure Questionnaire (MLHF). The MLHF is a 21 question instrument with a 6-point Likert scale response option and scores ranging from 0 to 105 [19]. In pilot testing of the MLHF, we found that low literacy patients had trouble answering questions with the standard 6-point Likert scale [15], so we changed the response scale to 4 points, using a visual display with stars to represent increasing severity. The 4-point Likert scale was scored as 0, 1, 3, and 5 to approximate standard scores on the MLHF. Secondary measures included heart failure self-efficacy, knowledge, and behaviors. Self-efficacy was measured with an 8 item scale developed for the behaviors needed in this trial as suggested by self-efficacy scale developers [20]. Respondents used a 4-point Likert scale yielding a total score from 0–24. We assessed heart failure knowledge using a knowledge test previously developed for this population [15], Heart failure self-management behavior was assessed by asking patients how often they weighed themselves. We used patient self-report and the medical chart to measure age, gender, race, insurance status, income, years of education, medication use, years with heart failure, and the presence of co-morbidities. We measured literacy using the Short Test of Functional Health Literacy in Adults (S-TOFHLA) [21], a well-validated scale that correlates well with other measures of reading ability [22]. Patients who scored in the inadequate literacy range on the S-TOFHLA were considered to have \"low literacy.\" The cut-point for inadequate literacy is roughly analogous to the 4th grade reading level.\n\nBODY.METHODS.SAMPLE SIZE:\nSample size was based on pilot study results showing a 9-point improvement in MLHF scores over 3-months with the intervention [15]. Detecting a 9-point difference between intervention and control group with 80% power and alpha set at 0.05 required 70 patients per group. We aimed to enroll 150 patients to account for possible attrition, but stopped at 127 because funding ended. 
We did not power this study to detect differences in hospitalization, but studies with even smaller numbers of patients have shown a difference for that outcome [7].\n\nBODY.METHODS.STATISTICAL METHODS:\nPatients who did not return any phone calls and did not return for follow-up assessment did not have outcome data for analysis. Patients who withdrew from the study were censored at the time of withdrawal; any data collected prior to withdrawal were included in the analysis. Baseline differences between groups were assessed using t-tests for continuous outcomes and chi-squared tests for categorical outcomes. For MLHF, heart failure self-efficacy and heart failure knowledge, we used two-sample t-tests. Non-parametric tests were also performed for all comparisons, but results did not differ from the parametric tests. Because of the small sample size and the unequal distribution of baseline characteristics, we adjusted for baseline differences using linear regression. Analyses of self-reported outcomes, such as MLHF, excluded patients who died or withdrew from the study before 6 or 12 month data was collected. For hospitalization or death, we used negative binomial regression and censored patients at the time of death or withdrawal from the study. Based on the likelihood ratio test, negative binomial regression was a better fit for the data than a Poisson regression. Additionally, the Vuong test confirmed that a zero-inflated model was inappropriate [23]. Because of uneven distribution of baseline characteristics, we performed analysis of covariance with negative binomial regression to control for baseline differences [24]. We identified the following variables that could contribute to hospitalization or death based on previous studies: age, race, gender, literacy level, hypertension, diabetes, creatinine, MLHF score, use of β-blockers, angiotensin converting enzyme (ACE) inhibitors or angiotensin receptor blockers (ARBs), use of digoxin, and presence of systolic dysfunction [7,25]. Variables were not included in the model if the relationship between the variable and the study group or outcome had a p value greater than 0.3. We started with a model including the following items to arrive at the best point estimate: age, gender, hypertension, creatinine, MLHF, use of β-blockers, and use of ACE inhibitors or ARBs. We then eliminated variables from the model if p > 0.30, and if the point estimate remained within 10% of the initial estimate. We prespecified a subgroup analysis in patients with low literacy to analyze if the intervention had a similar effect. The same analysis described above was repeated for the subgroups of patients with low literacy and those with higher literacy. The initial multivariate model for the subgroups analysis included: age, gender, hypertension, MLHF, use of β-blockers, and use of ACE inhibitors or ARBs.\n\nBODY.METHODS.ROLE OF THE FUNDING SOURCE:\nThe funding sources had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; or preparation, review, or approval of the manuscript.\n\nBODY.RESULTS.PATIENTS:\nWe screened 919 patients on furosemide between November 2001 and April 2003. 127 met eligibility criteria and agreed to participate (Figure 2). Of those not enrolled, 407 did not have heart failure according to their physician, 367 did not meet eligibility criteria and 27 declined to participate. 
Of those randomized to the control group, 1 never returned after the first visit, 1 withdrew during the study and 4 died during the study. Follow-up was completed for all of the remaining participants (98%) (Figure 3). Of those randomized to the intervention group, 3 never returned after the first visit, 4 withdrew during the study and 3 died during the study. Follow-up was completed for all of the remaining participants (93%). At baseline, most characteristics were similar between the two groups (Table 1). However, the control group had more participants with hypertension, fewer with diabetes, and fewer men. Of heart failure-related characteristics, the control group had more participants with systolic dysfunction and more taking β-blockers. The intervention group had more participants taking ACE inhibitors or ARBs, and more taking digoxin. None of these differences was statistically significant. The control group did, however, have statistically significantly higher MLHF scores at baseline, representing worse symptoms.\n\nBODY.RESULTS.HOSPITALIZATION OR DEATH:\nThere were 68 hospitalizations or deaths (65 hospitalizations, 3 deaths) in the intervention group and 111 (107 hospitalizations, 4 deaths) in the control group. The crude all-cause hospital admission or death incidence rate ratio (IRR) was 0.69 (95% CI 0.40, 1.19). After adjusting for age, gender, use of ACE inhibitor or ARB, use of a β-blocker, presence of hypertension, and baseline MLHF, intervention patients were less likely to have the outcome (IRR = 0.53; 95% CI 0.32, 0.89). 61% of patients in the control group had at least one hospitalization or died, and 42% of patients in the intervention group had at least one hospitalization or died (p = 0.13).\n\nBODY.RESULTS.CARDIAC HOSPITALIZATION:\n39% of patients in the control group and 34% of patients in the intervention group had at least one hospitalization for cardiac causes (p = 0.55). The unadjusted IRR was 0.79 (95% CI 0.42, 1.5). After adjusting for baseline differences, the IRR was 0.85 (95% CI 0.44, 1.7).\n\nBODY.RESULTS.HEART FAILURE-RELATED QUALITY OF LIFE:\nIn unadjusted analysis, the control group, on average, improved 5 points on the MLHF and the intervention group improved by 1 point. The difference was not statistically significant (3.5 points, 95% CI 11, -4, p = 0.36). After adjusting for baseline differences between the groups, the difference was 2 points (95% CI 9, -5, p = 0.59), suggesting no effect on heart failure-related quality of life.\n\nBODY.RESULTS.OTHER OUTCOMES.KNOWLEDGE:\nHeart failure-related knowledge improved more in the intervention group than in the control group. Mean difference in score improvement was 12 percentage points (95% CI 6, 18; p < 0.001).\n\nBODY.RESULTS.OTHER OUTCOMES.SELF-EFFICACY:\nHeart failure self-efficacy improved more in the intervention group than in the control group. Mean difference in score improvement was 2 points (95% CI 0.7, 3.1; p = 0.0026).\n\nBODY.RESULTS.OTHER OUTCOMES.SELF-CARE BEHAVIOR:\nSignificantly more patients in the intervention group than in the control group reported daily weight measurement at 12 months (79% vs. 29%, p < 0.001).\n\nBODY.RESULTS.SUBGROUP ANALYSES ACCORDING TO LITERACY:\nTwenty-four patients in each group had inadequate literacy based on the S-TOFHLA (Table 2). Among these patients, there was no difference in quality of life score in unadjusted and adjusted analyses (difference = -1.6; 95% CI -15, 12; p = 0.81). For the combined outcome of hospitalization or death, the unadjusted IRR was 0.77 (95% CI 0.30, 1.94). 
After adjusting for baseline differences, the IRR was 0.39 (95% CI 0.16, 0.91). Seventy-five patients had marginal or adequate literacy based on the S-TOFHLA. We found no difference in quality of life score in unadjusted and adjusted analyses (difference = -4.2; 95% CI -14, 6; p = 0.40). Among patients in the higher literacy group, the unadjusted IRR for hospitalization or death was 0.65 (95% CI 0.33, 1.3). After adjusting for baseline differences, the IRR was 0.56 (95% CI 0.30, 1.04). We did not find a statistically significant effect modification between literacy and the intervention.\n\nBODY.DISCUSSION:\nA heart failure self-management program designed for patients with low literacy reduced the rate of the combined endpoint of hospitalization or death. The prespecified subgroup analyses suggest that patients with low literacy benefited as much from the intervention as the patients with higher literacy. The success of our intervention reflects the goals of our program. We designed an easy-to-read and use educational booklet and self-management plan, and focused on overcoming barriers to learning self-management [15]. Our intervention was founded on teaching self-management. We focused on helping patients understand signs and symptoms of worsening heart failure and perform self-adjustment of diuretics based on weight fluctuation. Many care providers would not attempt to teach patients, particularly those with low literacy, how to self-adjust their diuretic medication. We found that, with careful teaching, many patients incorporated this strategy into their daily routine successfully, as demonstrated by improved self-care behaviors. Teaching self-adjustment of diuretics, rather than the conventional teaching to call the care provider if weight fluctuates, empowers patients to take more control over their illness. Self-adjustment of diuretic dose is a prominent aspect of the self-management training we provided to the intervention patients. Other programs to improve patient self-management have not been explicit in teaching patients to self-adjust their diuretic dose based on weight fluctuation. Although our outcomes are comparable to others', using this approach puts more control into the hands of the patient. Furthermore, our intervention appears effective among patients with low literacy skills, a group often overlooked for empowering interventions. Our study adds to the growing literature on disease management programs for patients with heart failure [6], particularly those that focus on self-management training [7-10]. Studies focusing on self-management training have demonstrated comparable improvements in hospitalization rates to more comprehensive programs that aim to improve the quality of pharmaceutical prescribing, provide home visits, and take place in specialized cardiology clinics [6]. Such comprehensive programs have also been shown to reduce mortality, but self-management programs have not [6]. We did not detect any difference in heart failure related quality of life which was the outcome we powered our study to detect. Other self-management studies that have found improved quality of life have enrolled patients during a heart failure hospitalization [8,9]; however, we enrolled patients in the outpatient setting while they were clinically stable. Improving quality of life for stable outpatients may be more difficult for this type of intervention. 
We have previously reported that patients with diabetes and low literacy benefited more from a disease management intervention than those with higher literacy skills [26]. A similar result in two different chronic diseases substantiates the claim that appropriately designed disease management programs may have greater effect for low literacy or vulnerable populations, who are most at risk for adverse outcomes with usual care. This finding is particularly important in light of the recent study by DeBusk and colleagues that did not find a reduction in hospitalization with a well-designed comprehensive intervention [13]. The authors and an accompanying editorial [14] suggested that the failure to detect improvement may have occurred because the patients studied were less at-risk than in other studies. They called for more research to determine better ways of targeting disease management. We believe that low literacy is an important marker for vulnerability to adverse outcomes, and that disease management programs targeted to patients with low literacy may be an effective way of focusing resources on those most able to benefit. If patients with low literacy are to be preferentially recruited for such programs, innovative outreach and screening efforts will likely be required, as patients with low literacy may face particular barriers to accessing such care. This study should be interpreted in light of its limitations. Research assistants were not blind to group assignment during the assessment of self-reported outcomes. As such, patients in the intervention may have been more likely to inflate their responses in an effort to please the interviewer. This effect would tend to inflate patient responses to the subjective assessments of heart failure-related quality of life, self-efficacy, and self-care behaviors. The MLHF questionnaire was modified from its original form to make it easier for patients with low literacy to respond. This change in the scale may have changed its ability to detect important changes in heart failure related quality of life. Because the groups' mean scores were almost identical, we do not feel this limitation changed our results. In a similar vein, most questionnaires are not validated in low literacy populations, raising questions as to their ability to perform to the same standards. Our sample size was small, which did not allow for an even distribution of baseline variables among the groups. We controlled for baseline differences between groups in our analysis. While it is controversial whether or not to control for baseline differences in randomized controlled trials, some analysts have argued that doing so improves the power without introducing bias [24]. A larger, multi-site study would offer better control of confounders, better generalizability, and more power to determine differences in effect according to literacy. We did not collect data on the resources needed to implement this type of intervention in usual settings, and such a study and cost-effectiveness analysis would be helpful for most interventions of this type. We used health educators, not nurses or physicians, to deliver the intervention. By using less highly trained individuals to deliver the intervention, we enabled nurses and physicians to perform clinical tasks more commensurate with their training. 
Other studies that have performed global cost estimates have found that the savings from reductions in hospitalizations exceed the cost of the intervention [6].\n\nBODY.CONCLUSION:\nIn conclusion, our heart failure self-management program, designed for patients of all literacy levels, appears to reduce rates of hospitalization and death. Patients with low literacy, and other vulnerable patients, may stand to benefit most from these programs. Further research into the design, implementation, and dissemination of disease management programs for low literacy patients will be crucial for meeting the health care needs of the growing population of patients with chronic illness.\n\nBODY.COMPETING INTERESTS:\nDrs. DeWalt and Pignone have received honoraria and grants from Pfizer, Inc., Dr. Rothman has received grants from Pfizer, Inc., and Dr. Sueta is in their speakers bureau.\n\nBODY.AUTHORS' CONTRIBUTIONS:\nDD conceived of the study, participated in its design and coordination, performed statistical analyses, interpretation of the data, and drafted the manuscript. RM, MB conceived of the study and participated in its coordination. MK, KC coordinated the study, and collected the data, RR, CS participated in study design and interpretation of the data. MP conceived of the study, participated in its design and coordination, and interpretation of the data. All authors reviewed the manuscript for important intellectual content and gave final approval.\n\nBODY.PRE-PUBLICATION HISTORY:\nThe pre-publication history for this paper can be accessed here:\n\n**Question:** Compared to Standard information about self-care what was the result of Follow-up and thorough education on self-care on Heart failure-related quality of life at the end of treatment?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** no significant difference\n"
}
] |
no significant difference
|
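The heart failure trial in the row above describes modelling hospitalization-or-death counts with negative binomial regression, censoring at death or withdrawal and adjusting for baseline covariates, and reports the result as an incidence rate ratio (IRR). The sketch below shows how such an adjusted IRR could be estimated with statsmodels on synthetic data; the variable names, covariate set, fixed dispersion parameter and all values are assumptions for illustration, not the authors' code or data.

```python
# Minimal sketch (synthetic data): estimating an adjusted incidence rate ratio
# (IRR) for event counts with a negative binomial GLM, in the spirit of the
# analysis described above. Column names, covariates, the fixed dispersion
# parameter (alpha) and all values are illustrative assumptions.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 120
df = pd.DataFrame({
    "intervention": rng.integers(0, 2, n),       # 1 = self-management program
    "age": rng.normal(60, 10, n).round(),
    "baseline_mlhf": rng.normal(45, 20, n).round(),
    "followup_years": rng.uniform(0.5, 1.0, n),  # person-time (varies with censoring)
})
# Simulate counts with a lower event rate in the intervention arm
rate = np.exp(0.8 - 0.6 * df["intervention"] + 0.01 * (df["age"] - 60))
df["events"] = rng.poisson(rate * df["followup_years"])

X = sm.add_constant(df[["intervention", "age", "baseline_mlhf"]])
model = sm.GLM(
    df["events"], X,
    family=sm.families.NegativeBinomial(alpha=1.0),  # alpha fixed here, not estimated
    exposure=df["followup_years"],                   # person-time offset
)
result = model.fit()
irr = np.exp(result.params["intervention"])
ci_low, ci_high = np.exp(result.conf_int().loc["intervention"])
print(f"Adjusted IRR = {irr:.2f} (95% CI {ci_low:.2f}, {ci_high:.2f})")
```

Passing follow-up time as the exposure term is one way to express events per patient-year of treatment when follow-up lengths differ, which is how the rates in the article are framed; the paper itself does not specify its offset handling, so this detail is an assumption.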
326
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Efficacy and Tolerability of Budesonide/Formoterol in One Hydrofluoroalkane Pressurized Metered-Dose Inhaler in Patients with Chronic Obstructive Pulmonary Disease\n\n ABSTRACT:\nBackground: Combination therapy with a long-acting bronchodilator and an inhaled corticosteroid (ICS) is recommended in patients with chronic obstructive pulmonary disease (COPD) who have frequent exacerbations. The efficacy and tolerability of the combination of budesonide/formoterol have been demonstrated in patients with COPD when administered via the dry powder inhaler (DPI) in a 1-year study and when administered via the hydrofluoroalkane (HFA) pressurized metered-dose inhaler (pMDI) in a 6-month study. Objective: This study assessed the long-term efficacy and tolerability of budesonide/formoterol HFA pMDI in patients with moderate to very severe COPD. Methods: This was a 12-month, randomized, double-blind, double-dummy, parallel-group, active- and placebo-controlled, multicentre study (NCT00206167) of 1964 patients aged ≥40 years with moderate to very severe COPD conducted from 2005 to 2007 at 237 sites in the US, Europe and Mexico. After 2 weeks of treatment based on previous therapy (ICSs, short-acting bronchodilators allowed), patients received one of the following treatments twice daily: budesonide/formoterol pMDI 160/4.5 μg × two inhalations (320/9 μg); budesonide/formoterol pMDI 80/4.5 μg × two inhalations (160/9 μg); formoterol DPI 4.5 μg × two inhalations (9 μg); or placebo. Main outcome measures: The co-primary efficacy variables were pre-dose forced expiratory volume in 1 second (FEV1) and 1-hour post-dose FEV1. Results: Budesonide/formoterol 320/9 μg demonstrated greater improvements in pre-dose FEV1 versus formoterol (p = 0.008), and both budesonide/formoterol doses demonstrated greater improvements in 1-hour post-dose FEV1 versus placebo (p < 0.001). The rate of COPD exacerbations was lower in both budesonide/formoterol groups compared with formoterol and placebo (p ≤ 0.004). Both budesonide/formoterol doses were more effective than placebo (p ≤ 0.006) for controlling dyspnoea and improving health status (St George's Respiratory Questionnaire). All treatments were generally well tolerated. The incidence of pneumonia was not different for active (3.4–4.0%) and placebo (5.0%) groups. Conclusions: Budesonide/formoterol pMDI (320/9 μg and 160/9 μg) improved pulmonary function and reduced symptoms and exacerbations over 1 year in patients with moderate to very severe COPD. Only budesonide/formoterol pMDI 320/9 μg demonstrated greater efficacy for both co-primary variables compared with formoterol DPI 9 μg. 
Both budesonide/formoterol pMDI dosages were well tolerated relative to formoterol and placebo.\n\nABSTRACT.ELECTRONIC SUPPLEMENTARY MATERIAL:\nSupplementary material is available for this article at 10.2165/00003495-200969050-00004 and is accessible for authorized users.\n\nBODY.BACKGROUND:\nIn patients with chronic obstructive pulmonary disease (COPD) and frequent exacerbations, the combination of a long-acting bronchodilator and an inhaled corticosteroid (ICS) is recommended.[1] Budesonide and formoterol delivered via a single dry powder inhaler (DPI; Symbicort® Turbuhaler®; AstraZeneca, Lund, Sweden) have been shown to improve pulmonary function[2,3] and reduce COPD exacerbation rates.[2,3] Fixed-combination budesonide/formoterol is available in the US in a hydrofluoroalkane pressurized metered-dose inhaler (pMDI; Symbicort® Inhalation Aerosol; AstraZeneca LP, Wilmington, DE, USA). While most patients with COPD can benefit from either a pMDI or DPI, the pMDI formulation allows patients who may have inadequate peak inspiratory flow for optimal delivery from a DPI to use the pMDI effectively. A recent 6-month study demonstrated the efficacy and tolerability of budesonide/formoterol pMDI in patients with moderate to very severe COPD.[4] Treatment with budesonide/formoterol pMDI resulted in significant improvements in pulmonary function, dyspnoea and health-related quality of life; however, the duration of the study was limiting with respect to the analysis of low-frequency events, such as COPD exacerbations, and precluded an assessment of safety outcomes with long-term exposure.[4] In this study, the long-term efficacy and tolerability of two dosage strengths of budesonide/formoterol pMDI were assessed over 12 months in patients with moderate to very severe COPD.\n\nBODY.PATIENTS AND METHODS.PATIENTS:\nThe inclusion criteria were designed to select a population with moderate to very severe COPD with previous exacerbations (i.e. appropriate candidates for combination ICS/long-acting β2-adrenoceptor agonist [LABA] therapy): age ≥40 years, diagnosis of symptomatic COPD for >2 years, ≥10 pack-year smoking history, prebronchodilator forced expiratory volume in 1 second (FEV1) of ≤50% of predicted normal and prebronchodilator FEV1/forced vital capacity (FVC) of <70%. Patients were to have a Modified Medical Research Council dyspnoea scale score of ≥2 and a history of at least one COPD exacerbation requiring oral corticosteroids or antibacterials within 1–12 months before the first study visit. Additional enrolment criteria were the same as those in a similar 6-month study by Tashkin et al.[4]\n\nBODY.PATIENTS AND METHODS.STUDY DESIGN:\nThis was a randomized, double-blind, double-dummy, parallel-group, active- and placebo-controlled, multicentre trial (ClinicalTrials.gov identifier: NCT00206167) conducted from 2005 to 2007 at 237 sites in the US, Europe and Mexico. The study protocol was approved by a local institutional review board and ethics committee, and written informed consent was obtained from patients. The study was designed to conform with the Declaration of Helsinki, and was consistent with the International Conference on Harmonisation and Good Clinical Practice and applicable regulatory requirements. After meeting eligibility criteria, patients entered a 2-week run-in period, during which they received ICS monotherapy if previously stable on ICS (alone or in combination) and ipratropium bromide at a fixed dose if previously receiving anticholinergics. 
Albuterol (salbutamol) was permitted for rescue use throughout the study. At randomization, all previous ICSs were discontinued and patients were randomized to one of four treatments (figure 1). Patients who were treated with ipratropium bromide during the run-in period were allowed to continue on it throughout the study period. Other allowed and disallowed concomitant medications were the same as those reported in the study by Tashkin et al.[4]\nFig. 1Study design. To maintain blinding, patients received both a pressurized metered-dose inhaler (pMDI) and a dry powder inhaler (DPI) containing either active treatment or double-dummy placebo (PL) as appropriate. This study used formoterol (FM) DPI (Oxis® Turbuhaler®, AstraZeneca, Lund, Sweden) as the FM comparator because FM is not available as a hydrofluoroalkane pMDI in the US. A previous study in asthma patients reported equivalent FM-related bronchodilatory effects when FM was administered in combination with budesonide (BUD) via pMDI or alone via DPI.[5] Patients were asked to return to the clinic for follow-up visits 3–8 at the end of months 1, 2, 4, 6, 9 and 12, and received a telephone call 4 weeks after the last clinic visit. bid = twice daily; R = randomization.\n\nBODY.PATIENTS AND METHODS.EFFICACY EVALUATIONS:\nThe co-primary efficacy variables, measured at all clinic visits, were pre-dose FEV1 and 1-hour post-dose FEV1. Additional pulmonary function variables included pre-dose and 1-hour post-dose FVC measured at all clinic visits, and morning and evening peak expiratory flow (PEF) recorded daily in patient diaries. In a subset of patients (n = 491 of 1964; 25%) who agreed to undergo spirometry testing, 12-hour serial spirometry was performed and pre-dose and 1-hour post-dose inspiratory capacity (IC) were collected at randomization and months 6 and 12. FEV1 was measured pre-dose and at 5, 15, 30, 60, 120, 180, 240, 360, 480, 600 and 720 minutes post-dose. Baseline-adjusted average 12-hour FEV1 was calculated as the area between the 12-hour post-dose FEV1-over-time curve and the baseline pre-dose FEV1, divided by observation time. Spirometry was performed according to American Thoracic Society guidelines.[6] Crapo-predicted normals for FEV1 were used.[7] Secondary efficacy endpoints included COPD exacerbations, dyspnoea (separate and composite scores) and health status, all assessed as previously described.[4] A COPD exacerbation was defined as worsening of COPD requiring an oral corticosteroid or hospitalization.[4] Dyspnoea was assessed daily before the evening dose of study medication using the Breathlessness Diary, a validated single-item component of the Breathlessness Cough and Sputum Scale (BCSS).[8] Health status was assessed at months 1, 2, 6 and 12 using the St George's Respiratory Questionnaire (SGRQ).[9,10] Sleep score, percentage of awakening-free nights (sleep score of 0) and study rescue medication use were assessed as previously described by Tashkin et al.[4]\n\nBODY.PATIENTS AND METHODS.SAFETY EVALUATIONS:\nSafety was assessed by adverse event (AE) reporting. Pneumonia events were reported by physicians based on the Medical Dictionary for Regulatory Activities (version 10.0) pneumonia-related preferred terms (pneumonia, bronchopneumonia, lobar pneumonia or pneumonia staphylococcal). 
Vital signs and 12-lead ECGs were evaluated as previously described by Tashkin et al.[4] Subsets of patients were assessed for 24-hour urinary cortisol (n = 179), 24-hour Holter monitoring (n = 520), bone mineral density (BMD) at the hip and spine regions (n = 326), and ophthalmological assessments (n = 461), including intraocular pressure and progression of lenticular opacities. In all patients, blood and urine samples were collected, and comprehensive physical examinations were carried out at the time of screening and at the end of month 12. Blood and urine samples were analysed by a central laboratory (Quest Diagnostics Clinical Laboratories, Van Nuys, CA, USA). Vital signs, including heart rate and blood pressure, were measured at all study visits. Samples to assess 24-hour urinary cortisol levels were collected in a subset of patients at or before randomization and within 1 week before the 6- and 12-month visits, and analysed by Quest Diagnostics. 24-Hour Holter monitoring was performed at screening and months 1 and 4, BMD assessments (two sets of dual energy x-ray absorptiometry scans in the hip and spine regions) were performed at screening and the end of month 12, and ophthalmology assessments (intraocular pressure and lenticular opacities) were performed at screening and the end of months 6 and 12.\n\nBODY.PATIENTS AND METHODS.STATISTICAL ANALYSES:\nThe efficacy analysis set (i.e. intent-to-treat population) included all randomized patients who received at least one dose of randomized study medication and contributed sufficient data for at least one co-primary or secondary efficacy endpoint to be calculated during the randomized treatment period. The safety analysis population included all randomized patients who received at least one dose of randomized study medication and from whom any post-randomization data were available. For the subsets of patients who underwent serial spirometry, 24-hour urinary cortisol, BMD, 24-hour Holter monitoring and ophthalmological analyses, analysis sets were defined as those patients who received at least one dose of randomized study medication and had baseline and on-treatment values for the variable being assessed. Patients who discontinued prematurely completed final visit (month 12) assessments at the time of discontinuation, followed by the 4-week follow-up. A sample size of approximately 400 patients per treatment group was calculated to allow 90% power to detect a reduction in the number of COPD exacerbations of approximately 30%, adjusting for overdispersion of 2.3. This sample size ensured >95% power to detect a difference of 0.10L in FEV1, given an estimated standard deviation of 0.3 L. All tests were two-sided, with p ≤ 0.05 considered significant. The prespecified primary comparators were formoterol DPI for pre-dose FEV1 to demonstrate the contribution of budesonide and placebo for 1-hour post-dose FEV1. To address issues of multiplicity relating to multiple-dose comparisons and multiple variables of interest, a fixed-sequence testing procedure was adopted. The primary comparisons were prespecified as budesonide/formoterol pMDI 320/9 μg compared with (i) placebo for pre-dose FEV1 and 1-hour post-dose FEV1; and (ii) formoterol DPI 9 μg for pre-dose FEV1. If significant differences were obtained, comparisons continued with budesonide/formoterol pMDI 320/9 μg compared with placebo for (i) dyspnoea; (ii) SGRQ total score; and (iii) number of exacerbations. 
If significant differences were obtained for each of these key secondary variables, the lower dose of budesonide/formoterol pMDI was tested on the co-primary variables, as previously described, and, if significant differences were obtained, testing continued with number of exacerbations, dyspnoea and SGRQ total score compared with placebo. For all secondary efficacy variables, the primary comparison was budesonide/formoterol pMDI versus placebo. Primary and secondary variables were assessed as the change from baseline to the average over the randomized treatment period except for average 12-hour FEV1, mean FEV1 at 12 hours and SGRQ at end of treatment. Changes from baseline in the co-primary efficacy variables were analysed via analysis of co-variance (ANCOVA), adjusting for treatment, country and baseline value. The following three key secondary efficacy variables were identified: (i) dyspnoea; (ii) SGRQ total score; and (iii) exacerbations. Secondary variables were analysed similar to the co-primary efficacy variables; however, exacerbation rate (number/patient-treatment year) was analysed using a Poisson regression model, adjusted for country and differential randomization time, and time to first COPD exacerbation was described via Kaplan-Meier plot and analysed using the log-rank test. The number and percentage of patients experiencing clinically meaningful changes (based on a prespecified minimal important difference) in dyspnoea were analysed using the Cochran-Mantel-Haenszel test, adjusting for country. Geometric mean 24-hour urinary cortisol levels at end of treatment were compared between treatment groups using a multiplicative ANCOVA model. Mean changes from baseline to the average during the randomized treatment period in 12-lead ECG, and Holter variables and ophthalmology variables were analysed using a model similar to that used for the co-primary variables. BMD variables (natural logarithm of the analysis timepoint minus the natural logarithm of the respective baseline value for hip and spine) were analysed using an ANCOVA model adjusting for country, treatment and natural logarithm of the baseline value. Other safety data were summarized using descriptive statistics. No formal hypothesis testing of the safety data was performed.\n\nBODY.RESULTS.PATIENTS:\nOf 1964 randomized patients, 1355 completed the study (figure 2). Discontinuation was greater with placebo versus budesonide/formoterol and formoterol driven by withdrawal of consent. Time to discontinuation was significantly (p ≤ 0.004) prolonged in both budesonide/formoterol groups versus placebo. AE was the most common reason for study discontinuation, with a similar incidence across treatments (12.1–13.5%). Approximately 60% of patients had documented use of an ICS, either alone or in combination before entering the study. The percentage of patients who discontinued from the study in the formoterol and placebo groups was slightly higher in patients previously treated with ICS (34.9% and 38.0%, respectively) compared with those not previously receiving ICS before the study (26.8% and 33.9%, respectively). In contrast, patients in the budesonide/formoterol 320/9-μg group previously receiving ICS had a lower discontinuation rate compared with those not previously receiving ICS (23.1% vs 34.1%, respectively). In the budesonide/formoterol 160/9-μg group, the percentage of patients who discontinued the study was similar for those who were previously receiving an ICS (29.5%) and those who were not (28.2%).\nFig. 
2Patient disposition. AE = adverse event; bid = twice daily; BUD = budesonide; DPI = dry powder inhaler; FM = formoterol; med = medication; PL = placebo; pMDI = pressurized metered-dose inhaler. Most demographic and disease characteristics at baseline were similar across treatments (table I). However, in the serial spirometry subset (n = 491), mean percentage reversibility was greater in the budesonide/formoterol 320/9 μg (19.9%), budesonide/formoterol 160/9 μg (20.7%) and placebo (19.5%) groups versus formoterol (16.9%). COPD severity[1] was moderate in 17.8%, severe in 60.4% and very severe in 21.5% of patients. Common co-morbid conditions included hypertension (41.6%), lipid profile abnormalities (22.0%), cardiac disease (17.7%), diabetes mellitus (11.0%), osteoporosis (10.5%), cataracts (5.2%), atrial fibrillation/arrhythmia (4.3%) and congestive cardiac failure (2.9%).\nTable IPatient demographic and baseline clinical characteristics of randomized patients\n\nBODY.RESULTS.EFFICACY EVALUATIONS.PULMONARY FUNCTION.CO-PRIMARY ASSESSMENTS:\nImprovements in pre-dose FEV1 were significantly greater for budesonide/formoterol 320/9 μg compared with formoterol (primary comparison; p = 0.008) and for both budesonide/formoterol dosages compared with placebo (p < 0.001) [figure 3a]. Although improvements in 1-hour post-dose FEV1 were significantly greater for both budesonide/formoterol dosages compared with placebo (primary comparison; p < 0.001), budesonide/formoterol 320/9 μg also resulted in significantly greater improvements compared with formoterol (p = 0.023) [figure 3b]. Improvements from baseline were apparent at the first assessment (pre-dose FEV1 at end of month 1; 1-hour post-dose FEV1 at day of randomization) and overall maintained over the 12-month treatment period for both budesonide/formoterol dosages (figure 3a and b).\nFig. 3Co-primary efficacy endpoints. Least squares mean change from baseline by study visit over the randomized treatment period in (a) pre-dose forced expiratory volume in 1 second (FEV1) and (b) 1-hour post-dose FEV1. BUD = budesonide; DPI = dry powder inhaler; FM = formoterol; pMDI = pressurized metered-dose inhaler. * p < 0.001 vs placebo; † p ≤ 0.023 vs FM.\n\nBODY.RESULTS.EFFICACY EVALUATIONS.PULMONARY FUNCTION.SECONDARY ASSESSMENTS:\nMorning and evening PEF improved significantly for all active treatments compared with placebo (p ≤ 0.012) and for both budesonide/formoterol dosages compared with formoterol (p ≤ 0.017) [table II]. Results for pre-dose and 1-hour post-dose FVC are presented in the online supplement (see figures S1a and b in the supplementary material ['ArticlePlus'] at http://links.adisonline.com/DGZ/A5).\nTable IIMean (SD) changes from baseline in additional pulmonary function assessments In the serial spirometry subset, a ≥15% improvement in FEV1 was observed at 5 minutes after dose administration (first assessment) with all active treatments on the day of randomization (budesonide/formoterol 320/9 μg, 17.7%; budesonide/formoterol 160/9 μg, 20.3%; formoterol 9 μg, 16.5%) and at end of treatment (budesonide/formoterol 320/9 μg, 21.8%; budesonide/formoterol 160/9 μg, 22.4%; formoterol 9 μg, 15.0%) [figures 4a, b and c]. There was no evidence of a diminished effect at end of treatment over the 12-hour period in the budesonide/formoterol groups. 
Mean FEV1 at 12 hours and baseline-adjusted average 12-hour FEV1 were significantly improved with both budesonide/formoterol dosages compared with placebo on the day of randomization and at end of treatment (p ≤ 0.002) [table II]. Although both budesonide/formoterol dosages demonstrated significant improvements in mean FEV1 at 12 hours and in baseline-adjusted average 12-hour FEV1 compared with formoterol on the day of randomization (p ≤ 0.029), only budesonide/formoterol 320/9 μg demonstrated this effect at the end of treatment (p ≤ 0.004) [table II].\nFig. 4Mean percentage change from baseline in forced expiratory volume in 1 second (FEV1) over 12 hours at randomization and end of treatment (EOT) for (a) budesonide (BUD)/formoterol (FM) 320/9 μg twice daily (bid) vs placebo, (b) BUD/FM 160/9 μg bid vs placebo and (c) FM 9 μg bid vs placebo. DOR = day of randomization; DPI = dry powder inhaler; pMDI = pressurized metered-dose inhaler. No significant differences in pre-dose IC were observed among the treatment groups (figure 5a). Improvements from baseline to the average over the randomized treatment period in 1-hour post-dose IC were significantly greater with both budesonide/formoterol dosages compared with placebo (p < 0.001) and formoterol (p ≤ 0.018) [figure 5b]. Both dosages of budesonide/formoterol resulted in a 350 mL (≈25%) improvement from baseline to the average over the randomized treatment period in 1-hour post-dose IC, and formoterol resulted in a 210 mL (17%) improvement in 1-hour post-dose IC.\nFig. 5Least squares mean change from baseline by study visit over the randomized treatment period in (a) pre-dose inspiratory capacity (IC) and (b) 1-hour post-dose IC. BUD = budesonide; DPI = dry powder inhaler; FM = formoterol; pMDI = pressurized metered-dose inhaler. * p < 0.001 vs placebo; † p = 0.01 vs FM; ‡ p < 0.05 vs FM; § p < 0.01 vs placebo.\n\nBODY.RESULTS.EFFICACY EVALUATIONS.CHRONIC OBSTRUCTIVE PULMONARY DISEASE (COPD) EXACERBATIONS:\nTime to first COPD exacerbation was significantly prolonged with both budesonide/formoterol dosages compared with placebo (p ≤ 0.004) and with budesonide/formoterol 320/9 μg compared with formoterol (p = 0.026) [figure 6]. In addition, significant reductions in the overall number of exacerbations per patient-treatment year were observed with budesonide/formoterol 320/9 μg and 160/9 μg versus placebo (37% and 41%, respectively; p < 0.001) and formoterol (25% and 29%, respectively; p ≤ 0.004) [see figure S2 in the supplementary material]. These reductions were driven by exacerbations treated with oral corticosteroids, the rate of which was reduced with budesonide/formoterol 320/9 μg and 160/9 μg versus placebo (37.5% and 42.9%, respectively) and formoterol (24.1% and 30.6%, respectively) [p ≤ 0.006]. In addition, the percentage of patients in the budesonide/formoterol 320/9 μg, budesonide/formoterol 160/9 μg, formoterol and placebo groups who experienced an exacerbation was greater in those receiving ICS therapy either alone or in combination before the study (33.7%, 38.2%, 39.9% and 40.1%, respectively) compared with those not previously receiving ICS (25.8%, 24.9%, 29.4% and 32.8%, respectively).\nFig. 6Kaplan-Meier probability curve for the time to first chronic obstructive pulmonary disease exacerbation during randomized treatment. BUD = budesonide; DPI = dry powder inhaler; FM = formoterol; pMDI = pressurized metered-dose inhaler. 
* p ≤ 0.004 vs placebo; † p = 0.026 vs FM.\n\nBODY.RESULTS.EFFICACY EVALUATIONS.HEALTH STATUS AND COPD SYMPTOMS:\nImprovements in SGRQ total score were significantly greater for both budesonide/formoterol dosages compared with placebo (p ≤ 0.006) and for budesonide/formoterol 160/9 μg compared with formoterol (p = 0.006; table III). The improvements from baseline in SGRQ total score in the active treatment groups were not clinically meaningful versus placebo (i.e. decrease of ≥4 points). Significantly greater improvements in COPD symptom variables were observed for both budesonide/formoterol dosages versus placebo for all variables (BCSS, dyspnoea score, cough score, sputum score, sleep score, percentage of awakening-free nights and total daily rescue medication use) [p ≤ 0.003]. Compared with formoterol, mean improvements were significantly greater with budesonide/formoterol 320/9 μg (p ≤ 0.038) for all COPD symptom variables except sputum score and percentage of awakening-free nights, and with budesonide/formoterol 160/9 μg (p ≤ 0.047) for all COPD symptom variables except BCSS, dyspnoea and sputum score (table IV).\nTable IIIMean changes (SD) in SGRQ total and domain scores from baselinea to end of treatmentbTable IVMean (SD) changes in chronic obstructive pulmonary disease (COPD) symptom variables from baselinea to the average over the randomized treatment period Improvements (i.e. reductions) in dyspnoea scores were significantly greater for all active treatments versus placebo (p ≤ 0.003) and for budesonide/formoterol 320/9 μg versus formoterol (p = 0.032; table IV). All active treatment arms demonstrated ≥0.2 points (minimal important difference) change from baseline, but only budesonide/formoterol 320/9 μg had an increase of ≥0.2 points over placebo. A significantly (p < 0.001) greater percentage of patients in both budesonide/formoterol groups and in the formoterol group experienced clinically meaningful improvements in dyspnoea versus placebo (see figure S3 in the supplementary material).\n\nBODY.RESULTS.SAFETY EVALUATIONS:\nMean treatment exposure was lowest for placebo (270 days) compared with budesonide/formoterol 320/9 μg (305 days), budesonide/formoterol 160/9 μg (299 days) and formoterol (289 days). The most commonly reported AE (irrespective of causality) was COPD, which had a lower incidence with budesonide/formoterol 320/9 μg versus placebo and a slightly higher incidence with budesonide/formoterol 160/9 μg versus placebo (table V). AEs considered by the investigator to be related to study medication were generally similar among treatment groups with the most commonly reported being oral candidiasis, COPD and dysphonia (see table SI in the supplementary material).\nTable VOverall adverse events (AEs) [irrespective of relationship to study medication] reported by ≥3% of patients The total incidence of pneumonia-related AEs (pneumonia, bronchopneumonia, lobar pneumonia and pneumonia staphylococcal) was similar for budesonide/formoterol 320/9 μg and 160/9 μg (4.0% and 3.4%, respectively) compared with formoterol (3.4%) and placebo (5.0%) [see table SII in the supplementary material]. For potential lung infections other than pneumonia, the incidence was slightly higher in the active treatment groups versus placebo, which was driven largely by bronchitis (see table SII in the supplementary material). 
AEs typically or potentially associated with local and systemic effects of inhaled corticosteroids (local effects: aphonia, dysphonia, oral candidiasis and candidiasis; systemic effects: weight gain, adrenal suppression, ocular effects, skin effects, psychiatric disorder, diabetes control, thirst, taste effects and bone effects) were more frequent with budesonide/formoterol 320/9 μg (10.3% and 4.0%, respectively) and budesonide/formoterol 160/9 μg (5.7% and 4.5%, respectively) than formoterol (0.6% and 2.6%, respectively) and placebo (2.5% and 2.7%, respectively). The overall incidence of LABA class effects (i.e. tremor, palpitation, tachycardia, potassium changes, glucose changes, headache, agitation, anxiety, sleep effects and muscle cramp) was low, but higher among active treatment groups (budesonide/formoterol 320/9 μg [9.5%], budesonide/formoterol 160/9 μg [8.9%] and formoterol [6.5%]) versus placebo (4.8%). The incidence of cardiac-related AEs was higher for all active treatments (10.5–11.3%) versus placebo (6.9%). The most common cardiac-related AEs were hypertension and angina pectoris (budesonide/formoterol 320/9 μg [2.4% and 1.2%, respectively], budesonide/formoterol 160/9 μg [1.6% and 0.8%, respectively], formoterol [2.6% and 1.8%, respectively] and placebo [2.1% and 1.0%, respectively]). Although the number of atrial fibrillation AEs was low, all cases occurred within the active treatment groups. Discontinuations due to AEs (DAEs) occurred in 238 patients, with a similar incidence observed across treatment groups (range 11.3–12.5%) [see table SIII in the supplementary material]. The most common DAE was COPD, which was highest in the formoterol group (7.3%) and lowest in the budesonide/formoterol 320/9 μg group (4.0%); the incidence was similar in the budesonide/formoterol 160/9 μg (6.1%) and placebo (6.0%) groups (see table SIII in the supplementary material). The overall incidence of non-fatal serious adverse events (SAEs) was highest with formoterol (17.8%) compared with the other treatment groups and higher with budesonide/formoterol 320/9 μg (15.6%) and budesonide/formoterol 160/9 μg (13.6%) versus placebo (12.1%) [see table SIV in the supplementary material]. The most common non-fatal SAEs (occurring in five or more patients [all treatment groups combined]) were COPD (6.8%), pneumonia (1.5%), atrial fibrillation (0.5%), angina pectoris (0.3%), bronchitis (0.3%) and coronary artery disease (0.3%). These non-fatal SAEs occurred at a similar incidence in all treatment groups, except for COPD, which was slightly more common in the active treatment groups (budesonide/formoterol 320/9 μg [7.1%], budesonide/formoterol 160/9 μg [6.7%] and formoterol [7.9%]) versus placebo (5.6%). Fifteen patients died during the randomized treatment period with no imbalance across treatments: three in the budesonide/formoterol 320/9 μg group; six in the budesonide/formoterol 160/9 μg group; two in the formoterol group; and four in the placebo group. Fifteen patients died after the randomized treatment period (≥2 days after stopping study treatment) with no imbalance across treatments: five in the budesonide/formoterol 320/9 μg group; two in the budesonide/formoterol 160/9 μg group; four in the formoterol group; and four in the placebo group. None of the deaths were considered drug related by the investigator. Clinically significant changes in vital signs, 24-hour urinary cortisol, and 12-lead ECGs, QT interval and Holter assessments were rare. 
The geometric mean values of 24-hour urinary cortisol at 6 months and end of treatment were lower in both budesonide/formoterol dosage groups compared with formoterol and placebo (see table SV in the supplementary material). Significant differences in 24-hour urinary cortisol were observed between the budesonide/formoterol 320/9 μg and placebo groups (p = 0.035) at month 6 and between the budesonide/formoterol 320/9 μg and formoterol groups (p = 0.044) at end of treatment. Corrected QT (QTc) intervals ≥450 msec were experienced by similar percentages of patients across all treatment groups, whereas few patients experienced QTc intervals ≥500 msec or a QTc change ≥60 msec (see table SVI in the supplementary material). Data from ECG and Holter recordings showed a similar incidence of new onset atrial fibrillation in the active and placebo groups. BMD was stable over the 1-year study period in all treatment groups, with small but statistically significant differences in changes from baseline observed for budesonide/formoterol 320/9 μg compared with all other treatments for total lumbar spine BMD (p ≤ 0.037) and for budesonide/formoterol 320/9 μg versus formoterol for total hip BMD (p = 0.012) [see table SVII in the supplementary material]. The differences between the treatment groups in changes from baseline in BMD were of unclear clinical relevance, as the geometric least squares mean ratios for these treatment group comparisons were close to 1 (range 0.98–0.99). Minor increases in lenticular opacities and intraocular pressure were noted across all treatment groups in the ophthalmological analysis subset; the magnitude of these changes was similar among the treatment groups (see table SVIII in the supplementary material). Small but significant differences were observed for the change from baseline in posterior subcapsular score between the budesonide/formoterol groups (p = 0.022). Clinically significant changes in ophthalmological assessment were infrequent.\n\nBODY.DISCUSSION:\nIn this study, both budesonide/formoterol dosage strengths resulted in significant improvements from baseline over the randomized treatment period in pulmonary function, health status and COPD symptoms, as measured by dyspnoea, cough and sputum production, and a clinically significant reduction in COPD exacerbation rate compared with placebo. In addition, both budesonide/formoterol pMDI dosages were well tolerated relative to formoterol alone and placebo. The results of this 12-month study confirm those from the earlier related 6-month study by Tashkin et al.[4] and are consistent with those from studies of budesonide/formoterol DPI.[2,3] Taken together, the results demonstrate that this ICS/LABA combination provides benefits beyond formoterol alone in the treatment of COPD. The contribution of budesonide to improvements from baseline in pulmonary function and COPD symptoms was demonstrated by the greater efficacy of budesonide/formoterol 320/9 μg compared with formoterol alone for the co-primary variables of pre-dose FEV1 and 1-hour post-dose FEV1, and for dyspnoea and total BCSS scores, supporting use of the higher-dose combination for COPD symptoms. Both budesonide/formoterol dosage strengths demonstrated greater improvements from baseline compared with formoterol alone in morning and evening PEF, 12-hour FEV1, symptom scores and rescue medication use. 
In addition, the 25–29% reduction in exacerbation rate observed with both budesonide/formoterol dosage strengths beyond the substantial reduction achieved with formoterol alone further demonstrates the important contribution of budesonide to the combination product. The magnitude of the reduction in exacerbation rates reported in this study is similar to that reported previously for fixed-dose regimens of budesonide/formoterol 320/9 μg twice daily[2–4] and fluticasone propionate/salmeterol[11–13] relative to LABA alone. A key difference between this 12-month study and the 6-month study of similar design reported by Tashkin et al.[4] is that this study was powered to show a difference in exacerbations, while the 6-month study was not. This study demonstrated a significant reduction in exacerbation rates in patients treated with budesonide/formoterol 320/9 μg and 160/9 μg compared with those receiving formoterol or placebo. The fact that a greater percentage of patients in the formoterol and placebo groups who previously received ICS therapy, either alone or in combination, experienced exacerbations compared with those who did not previously receive ICS therapy may result in a skewing of the data; however, a similar response was observed in combination budesonide/formoterol patients with a greater percentage of patients who previously received ICS experiencing exacerbations compared with patients who were not previously treated with ICS therapy. Similarly, a greater percentage of patients who previously received ICS therapy discontinued in the formoterol and placebo groups than those who did not previously receive ICS therapy; however, this trend was reversed in the combination budesonide/formoterol groups. The long-term nature of this study allowed for a more thorough assessment of tolerability compared with the 6-month study.[4] No differences in pneumonia incidence were observed among treatment groups in this study or in the 6-month study, despite a shorter exposure time in the placebo group.[4] These results contrast with previous COPD studies that showed an increased incidence of pneumonia with fluticasone propionate/salmeterol treatment.[11–14] However, in this study and the 6-month study,[4] the incidence of potential lung infections other than pneumonia was slightly higher in all active treatments compared with placebo; these differences were largely driven by an increased incidence of bronchitis. The incidence of SAEs, including COPD SAEs, was slightly higher in both budesonide/formoterol groups compared with placebo. This finding may be attributed to the earlier discontinuation in the placebo group, which resulted in a shorter study drug exposure (35 and 29 fewer days vs budesonide/formoterol 320/9 μg and 160/9 μg, respectively). Although the overall incidence of atrial fibrillation, a known LABA class effect, was small (1.1%), an imbalance was noted between the formoterol-containing arms and placebo. However, no differences in new-onset atrial fibrillation were apparent among groups based on 24-hour Holter monitoring or ECG data collected at repeated visits during the randomized treatment period. 
Overall, the safety profile of budesonide/formoterol pMDI in this 12-month study was similar to that reported in the 6-month study by Tashkin et al.,[4] suggesting no increase in safety concerns with a longer duration of treatment.\n\nBODY.CONCLUSIONS:\nBoth dosage strengths of budesonide/formoterol pMDI (320/9 μg and 160/9 μg) were effective and well tolerated over 1 year of treatment in this population of patients with moderate to very severe COPD. In addition, budesonide/formoterol pMDI 320/9 μg demonstrated greater efficacy compared with formoterol for pre-dose FEV1, suggesting that the higher dosage strength containing twice the daily dose of budesonide is appropriate in this patient population.\n\nBODY.ELECTRONIC SUPPLEMENTARY MATERIAL:\nSupplementary material, approximately 247 KB.\n\n**Question:** Compared to Placebo what was the result of Budesonide/formoterol pMDI 160/9 μg on Baseline-adjusted average 12-hour FEV1 at end of treatment?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly increased\n"
}
] |
significantly increased
|
443
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Effects of high volume saline enemas vs no enema during labour – The N-Ma Randomised Controlled Trial [ISRCTN43153145]\n\n ABSTRACT.BACKGROUND:\nEnemas are used during labour in obstetric settings with the belief that they reduce puerperal and neonatal infections, shorten labour duration, and make delivery cleaner for attending personnel. However, a systematic review of the literature found insufficient evidence to support the use of enemas. The objective of this RCT was to address an identified knowledge gap by determining the effect of routine enemas used during the first stage of labour on puerperal and neonatal infection rates.\n\nABSTRACT.METHODS:\nDesign: RCT (randomised controlled trial; randomized clinical trial). Outcomes: Clinical diagnosis of maternal or neonatal infections, labour duration, delivery types, episiotomy rates, and prescription of antibiotics Setting: Tertiary care referral hospital at the Javeriana University (Bogotá, Colombia) that attended 3170 births during study period with a caesarean section rate of 26%. Participants: 443 women admitted for delivery to the obstetrics service (February 1997 to February 1998) and followed for a month after delivery. Inclusion criteria were women with: low risk pregnancy and expected to remain in Bogotá during follow up; gestational age ≥ 36 weeks; no pelvic or systemic bacterial infection; intact membranes; cervix dilatation ≤7 cm. Intervention: 1 litre saline enema, versus no enema, allocated following a block random allocation sequence and using sealed opaque envelopes.\n\nABSTRACT.RESULTS:\nAllocation provided balanced groups and 86% of the participants were followed up for one month. The overall infection rate for newborns was 21%, and 18% for women. We found no significant differences in puerperal or neonatal infection rates (Puerperal infection: 41/190 [22%] with enema v 26/182 [14%] without enema; RR 0.66 CI 95%: 0.43 to 1.03; neonatal infection 38/191 [20%] with enema v 40/179 [22%] without enema; RR 1.12, 95% CI 95% 0.76 to 1.66), and median labour time was similar between groups (515 min. with enema v 585 min. without enema; P = 0.24). Enemas didn't significantly change episiorraphy dehiscence rates (21/182 [12%] with enema v 32/190 [17%] without enema; P = 0.30).\n\nABSTRACT.CONCLUSION:\nThis RCT found no evidence to support routine use of enemas during labour. Although these results cannot rule out a small clinical effect, it seems unlikely that enemas will improve maternal and neonatal outcomes and provide an overall benefit.\n\nBODY.BACKGROUND:\nEnemas are frequently used in obstetric settings because they are thought to reduce the risk of puerperal and neonatal infections, shorten the duration of labour and make delivery cleaner for attending personnel [1,2]. However, the use of enemas is controversial and there is little evidence of their effectiveness. 
Enemas are upsetting and humiliating for women in labour and increase the workload in labour wards. Enemas cause watery stools and could theoretically increase contamination and infection rates[3,4]. A systematic review of the literature found one trial with 222 women, which found no difference in puerperal or neonatal infections. However, it only followed women while they remained hospitalized; this time may be too short to identify outcomes that could be affected by enemas. The review concluded that there was insufficient evidence to recommend the use of enemas during labour and called for additional RCTs [5]. The main objective of this RCT was to find out if the use of high volume enemas during the first stage of labour modified neonatal and puerperal infectious rates. The null hypothesis stated that proportion of infections was similar for the intervention and control groups. The secondary objectives were to establish if there was an effect on specific neonatal or puerperal infectious rates, and other clinically relevant outcomes such as neonatal infections, maternal pelvic inflammatory disease or suture dehiscence, and antibiotic prescriptions. The protocol for this study was presented and debated at two meetings of the Latin American Clinical Epidemiology Network: Latinclen I in April 1995 in Colombia; and the Latinclen III in September 1997 in Dominican Republic.\n\nBODY.METHODS:\nThe RCT was conducted at the obstetric service of the Hospital Universitario San Ignacio, a tertiary care referral centre in Bogota, Colombia. Women attending the admission unit of the obstetrics' clinic for delivery were invited to participate. In accordance with the protocol, the examining intern or resident would invite all eligible women to participate in the study. Recruitment rates varied greatly among different interns and residents. Because of the frequent rotation of interns and residents through the admission unit, interns and residents attended standardised monthly training sessions on how to enrol patients on the study. Inclusion criteria included: absence of life threatening events at admission interview (such as placental abruptio, prolapsed cord, or eclampsia); gestational age ≥36 weeks based on the best available estimation such as a reliable last menstrual period date or an appropriate ultrasound; projected permanence in Bogota during the month following delivery; and willingness to participate expressed in a written informed consent. There was no age restriction for participants. However, for women under 18 years of age, an adult witness – most frequently a next of kin, was also asked to participate in the informed consent process. Exclusion criteria included: a clinical diagnosis of any systemic or gynaecological bacterial infection; use of systemic antibiotics during the week prior to admission; rupture of amniotic membranes or uncertainty of their integrity; or a cervical dilatation >7 cm. The trial protocol was approved by the Institutional Review Board of the Medical School of the Javeriana University and by the staff of the Clinical Epidemiology and Biostatistics Unit. Women willing to participate were offered a range of additional services during follow-up. These services were designed to improve outcome detection and adherence to the study. All women received a booklet that addressed questions frequently raised by women during labour and puerperium (30 days following delivery), and informed on early signs for maternal and neonatal infections[6]. 
The development of the booklet involved semi-structured interviews carried out by a trained nurse who would elicit the issues that worried women during the first day, first week and first month after delivery. The team that designed the booklet included graphic designers and a social communications specialist. The booklet was tested and printed prior to the beginning of the RCT. Participants were offered two programmed health care visits where their concerns could be discussed with a professional nurse. During the visits, the nurse examined the mother and the baby and assessed outcomes. Systematic telephone reminders for these visits were scheduled before the 1st and 4th week after delivery. Women received instructions on how to access a 24-hour paging service which allowed them to contact a health care provider to address any concerns, seek support, inform about problems, or get advice on issues such as emergency medical attention. This allowed retrieving relevant information of visits to other healthcare providers and contacting them to collect data. The local branch of La Leche League International, a volunteer organization delivering support and education on breastfeeding, offered regular free educational sessions for participants. All participants were offered subsidised screening for neonatal hypothyroidism; when this RCT was done, screening for thyroid disease was not mandatory nor covered by health maintenance organisations in Colombia.\n\nBODY.METHODS.ASSIGNMENT AND FOLLOW-UP PROCEDURES:\nRandomisation was done in blocks of 2 (20%), 4 (60%) and 6 (20%) using Ralloc® allocation software. Once the participant had completed the informed consent process, an opaque envelope with sequential numbering and instructions was opened. Women randomised to the intervention group received a 1 litre Travad® 2.5% sodium chloride solution enemas applied by a nurse assistant prior to been taken to the labour wards, where participants in both groups would thereafter receive the same care. During the first and third week after delivery, reminders to attend scheduled appointments at the outpatient primary care clinic were delivered by phone and mail. We used a standardised telephone survey to assess if participants had used any other health services or been diagnosed with any particular condition. Newborns were screened for hypothyroidism during the first scheduled visit when the mother agreed to it. Results for the screening tests were scheduled to be delivered at the second visit, but these were also couriered with an explanatory note if the appointment was missed. The screening scheme covered confirmation diagnostic tests, when necessary. The primary outcomes were the diagnosis of infections in newborns or women during the month following delivery. A neonatal outcome was positive if during the first month of life the child was prescribed systemic antibiotics. A neonatal outcome was also positive when the child was diagnosed with any of the following clinical conditions: ocular infection (purulent drainage in the eye after the sixth day of delivery), umbilical infection (foul smell with periumbilical erythema), skin infection (cellulitis or impetigo), lower or upper respiratory tract infection, intestinal infection, meningitis or sepsis. 
A puerperal outcome was positive when, during the first month after delivery, a health care provider diagnosed the women with any of the following: dehiscence of the episiorraphy suture, purulent effusion from the episiorraphy, urinary tract infection, pelvic inflammatory disease, or vulvovaginitis. The primary outcome of the study was an aggregated maternal and neonatal infection rate: either the mother or the newborn had an infectious outcome (combined infection rate) . A team member visited participating women and newborns in hospital on a daily basis. Throughout the trial, trained research assistants using standardised questionnaires, registered data from telephone calls, hospitalisations, follow-up visits, and any communications with the participants, their families, or their health care providers.\n\nBODY.METHODS.MASKING:\nMasking the use of enemas was unfeasible. However, we made efforts to conceal the intervention by not separating documents with information on the allocation from those outcome data collection, by training the team's supporting clinical team (professional nurse, family medicine residents, family medicine staff, and consulting dermatologist) to avoid enquiring in ways that would unmask the allocation. Health care providers in other settings, such as physicians at emergency wards, paediatricians and medics at outpatient clinics were unaware of the allocation and frequently of the specific objectives of the study. Except for the intervention, participants received the same health care, and data retrievers would remain unaware of individual allocations. Interventions remained coded for the analysis and the code was broken once the analysis was completed. Input between data at recruitment and allocation was done weeks before the collection of data on outcomes at follow up.\n\nBODY.METHODS.SAMPLE SIZE DETERMINATION:\nWe were unable to find reliable data on the incidence of baseline infection rates for puerperal women or newborns so we did a pilot study to have base data that would allow a good estimate of frequencies for sample size calculations. The pilot study, which included the first 44 participants of the control group, estimated the combined infection rate of puerperal women and newborns at 46 percent [7]. Using 5% significance and a 80% power, a sample of 394 participants distributed in two parallel groups was estimated to be required to detect a relative difference of 25% in combined infection rates of women and newborns, following the formulas provided by Duppont and Plummer[8]. Assuming 4% of the participants would be lost to follow-up, an estimated total sample of 410 women was required.\n\nBODY.METHODS.DATA MANAGEMENT AND ANALYSIS:\nThe database created in Epi-Info v 6.04 b was fed using double data entry and transferred to Stata 5.0© using Stata Transfer 4.0©. A Shapiro-Wilk test was used to determine if the distribution of continuous variables was normal. Non-normal distributions were transformed using a log transformation, and if the distribution was persistently non-normal, a Mann-Whitney test was used to compare groups. Bivariate analyses were done using the chi-square test or Fisher's exact test. Power calculations were done using specialized software developed at the Clinical Epidemiology Unit at the Javeriana University[9].\n\nBODY.RESULTS:\nDuring the twelve months recruitment period (Feb 1997–Feb 1998) 3170 women were admitted for delivery to the obstetric service. The caesarean section rate in the obstetrics service was 26% at the time. 
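For illustration only, the analysis plan described above (a Shapiro-Wilk normality check, a log transformation, a Mann-Whitney fallback, and chi-square or Fisher's exact tests for categorical outcomes) can be sketched as follows. The study itself used Epi-Info and Stata 5.0, not Python; this SciPy version uses simulated labour durations and the puerperal-infection counts quoted in the abstract, and the parametric branches are an assumption, since the paper only states the non-parametric fallback.

```python
import numpy as np
from scipy import stats

def compare_durations(x, y, alpha=0.05):
    """Normality-driven choice of test, following the analysis plan above:
    Shapiro-Wilk, then a log transformation, then Mann-Whitney if the data
    remain non-normal. (The parametric branches are an assumption.)"""
    if stats.shapiro(x)[1] >= alpha and stats.shapiro(y)[1] >= alpha:
        return "t-test", stats.ttest_ind(x, y)
    lx, ly = np.log(x), np.log(y)
    if stats.shapiro(lx)[1] >= alpha and stats.shapiro(ly)[1] >= alpha:
        return "t-test on log scale", stats.ttest_ind(lx, ly)
    return "Mann-Whitney", stats.mannwhitneyu(x, y)

rng = np.random.default_rng(0)
enema = rng.lognormal(mean=6.2, sigma=0.5, size=190)     # hypothetical labour durations (min)
no_enema = rng.lognormal(mean=6.4, sigma=0.5, size=182)  # hypothetical labour durations (min)
print(compare_durations(enema, no_enema))

# Categorical outcomes: chi-square test, or Fisher's exact test for sparse 2x2 tables.
table = np.array([[41, 149], [26, 156]])  # puerperal infection yes/no by arm (abstract counts)
chi2, p, dof, expected = stats.chi2_contingency(table)
odds_ratio, p_fisher = stats.fisher_exact(table)
print(f"chi-square p = {p:.3f}, Fisher exact p = {p_fisher:.3f}")
```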
Of the 460 women interviewed for recruitment, 16 were non-eligible and 1 declined to participate. We randomised 443 women (see Figure 1), among which we had 12 protocol violations; 4 in the enema group and 8 in the control group. Nevertheless, these women were offered the care and benefits that all other participants had. Protocol violations included admission with ruptured amniotic membranes (5 women in the control group), infection at admission (1 in the control group, 2 in the enema group). Five women didn't fulfil inclusion criteria but were randomised (2 in the control group and 3 in the enema group). The analysis for the remaining women was done by group of allocation. Women who delivered by caesarean section were considered in the analysis. Data were not available to include in an 'intention to treat' analysis the 12 women excluded because of violations to the selection criteria (protocol violations). Follow up was completed by 87% of the participating women and 86% of newborns. Direct examination by a team member at one month follow up was carried out in 20% of women and 19% of newborns (P = 0.51 and P = 0.98 respectively, with similar distribution between groups); standardised telephone interviews with participants and healthcare providers allowed to assess outcomes from the remaining participants. Baseline characteristics were similar in both groups, suggesting that randomisation provided well-balanced and comparable groups (Table 1 [see additional file 2]). Labour duration times and other maternal outcomes were obtained from women's records after delivery and are presented in Table 2 [see Additional file 2]. Neonatal baseline data obtained shortly after delivery from newborns' records are summarised in Table 3 [see Additional file 2]. We found no statistically significant differences between groups for labour duration, delivery types, episiotomy rates, or prescription of antibiotics. Caesarean sections were done in 12% of women with no significant differences in rates between groups. No significant differences were found in the distribution between groups for newborns' \"Ballard\" score, birth weight, diagnosis of neonatal apnoea, or the administration of ocular and umbilical prophylaxis. Five newborns allocated to the control group and none in the treatment group developed respiratory tract infections, but this difference had no statistical significance. Two out of the five newborns who developed lower respiratory tract infections were delivered by caesarean section. The three newborns with omphalitis belonged to the intervention group, but again this difference was not statistically significant. Similarly, no significant differences were found for ophthalmic infection rates, skin infections, intestinal infections or the need for systemic antibiotics (Table 4 [see Additional file 2]). No statistically significant differences were found for any of the assessed outcomes in puerperal women. Pelvic infections affected 4% of women: one was diagnosed with myometrytis; five with endometritis; three had vulvovaginitis; and six had infected episiorraphy sutures. The frequency and severity of perineal tear was similar in the intervention and control group. No significant differences were found in the rates of suture dehiscence among the 372 women who had epysiorraphy (Table 5 [see Additional file 2]). 
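As a worked illustration of how the group comparisons above are summarised, the risk ratios and 95% confidence intervals quoted in the abstract (e.g. puerperal infection RR 0.66, 95% CI 0.43 to 1.03) can be approximately reproduced from the reported counts using a standard log-scale (delta-method) confidence interval; the paper does not state its exact method, so last-digit differences from rounding or the variance formula are expected. An interval that spans 1 corresponds to the "no significant difference" conclusions reported here.

```python
import math

def risk_ratio_ci(a, n1, b, n2, z=1.96):
    """Risk ratio of group 1 vs group 2 with a log-normal 95% CI
    (delta-method standard error on the log scale)."""
    rr = (a / n1) / (b / n2)
    se_log_rr = math.sqrt(1 / a - 1 / n1 + 1 / b - 1 / n2)
    lo = math.exp(math.log(rr) - z * se_log_rr)
    hi = math.exp(math.log(rr) + z * se_log_rr)
    return rr, lo, hi

# Puerperal infection, no enema vs enema (the direction quoted in the abstract):
# 26/182 without enema vs 41/190 with enema -> roughly RR 0.66 (0.42 to 1.04)
print(risk_ratio_ci(26, 182, 41, 190))

# Neonatal infection: 40/179 without enema vs 38/191 with enema -> roughly RR 1.12 (0.76 to 1.67)
print(risk_ratio_ci(40, 179, 38, 191))
```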
Breast pain complaints were not categorised as an infectious outcome and affected 31% of women, with 79% of them suffering breast engorgement or nipple cracking and no significant difference between groups (P = 0.75). Summarised outcomes are provided in Table 6 [see Additional file 2]. Overall, one in five newborns had an infectious outcome, and rates were not statistically significant between groups. No significant differences were found for aggregated maternal infections, although the study may have been underpowered to rule out such differences; We found an 8% absolute risk difference and a broad, skewed confidence interval (see Table 6 [see Additional file 2]). The aggregated outcome of \"neonatal or puerperal infection\" during the 30-day follow-up was higher in the control group than in the intervention group. However, the 6% difference in absolute risk can be due to chance (P = 0.23). Infections affecting both child and mother in the same family were not significantly different (6/183 [3%] with enema v 5/191 [3%] with no enema; P = 0.39). We planned to assess the effect of enemas' on labour duration using multiple linear regression to adjust for parity. However the normality test of the variable was rejected and a Boxcox transformation (with a range between -2 and 2) did not provide an appropriate model. The residuals analysed through a robust regression had a non-normal distribution, so a non-parametric quintile regression was done, finding no significant differences of labour duration within study groups after adjusting for parity (P = 0.07).\n\nBODY.RESULTS.PARTICIPANT FLOW AND FOLLOW-UP:\nDescribed in Figure 1 [see Additional file 1].\n\nBODY.DISCUSSION:\nPuerperal and neonatal infections, although seldom life threatening, were very frequent in this study. Ophthalmic infections were the most frequent infections amongst newborns. Breast engorgement and nipple cracking were the most frequent maternal complaints during puerperium. Episiorraphy dehiscence was the most frequent infectious outcome in women. We were impressed by how frequent these outcomes were, and it is likely that these problems are being missed in studies with a shorter follow up, such as those that only follow women during hospitalization. It also suggests that the follow up strategy probably had a good sensitivity. High volume enemas used during the first stage of labour did not have a significant effect in the incidence of puerperal or neonatal infections, or labour duration. This RCT found no significant differences in puerperal or neonatal infection rates with enemas. No statistically significant effects were found when analysing women or newborns separately or when their outcomes were aggregated to analyse them as mother-newborn dyads. The RCT had a power in excess of 80% to find differences as big as 25% in the aggregated infection rates for women and newborns. However, the study may have been underpowered to detect differences for individual outcomes. This RCT had an estimated power of 56% to find differences in neonatal infections, and 61% power to detect puerperal infections. A higher rate of operative vaginal deliveries was found in the enema group, although it did not reach statistical significance. It is worth mentioning this, because operative vaginal deliveries may have an effect in puerperal infection rates. The use of an aggregated outcome helped to reduce sample size but it would have been ideal to have a sample size large enough to establish effects in newborns and puerperal women separately. 
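The non-parametric regression described above is read here as median (0.5-quantile) regression of labour duration on treatment group, adjusted for parity (e.g. Stata's qreg); the sketch below shows the equivalent statsmodels call. The variable names and simulated values are illustrative assumptions, not the trial dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 372
# Hypothetical analysis dataset; not the trial data.
df = pd.DataFrame({
    "duration_min": rng.lognormal(mean=6.3, sigma=0.5, size=n),  # labour duration (min)
    "enema": rng.integers(0, 2, size=n),                         # 1 = enema, 0 = no enema
    "parity": rng.poisson(1.2, size=n),                          # number of prior births
})

# Median (0.5-quantile) regression of labour duration on treatment, adjusted for parity.
fit = smf.quantreg("duration_min ~ enema + parity", df).fit(q=0.5)
print(fit.params)
print("p-value for the enema term:", fit.pvalues["enema"])
```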
Despite being practical, aggregating results has important limitations: if the maternal and neonatal outcomes have significantly different magnitudes or point in different directions, aggregation will cancel out or underestimate the differences. We didn't have resources to collect information to assess if the population of women admitted to the trial represented all eligible women. However, participation in the trial was apparently determined by the commitment of recruiters, not the participants' risk. The RCT did not evaluate women's preferences or known adverse effects of enemas, such as pain, discomfort, embarrassment, or diarrhoea. Since just one-fifth of the participants were personally examined by trained research assistants at the one-month assessment, measurements of these outcomes may have been imprecise and could potentially disguise existing differences, accounting for the lack of differences (risk of Type II error). Nevertheless, significant misrepresentations of outcomes grave enough to require hospitalisation, dedicated care, or urgent consultations are unlikely with the follow up strategy we endorsed. Overall, participants and their families were helpful and willing to provide information; and we went to great efforts to use more than one information source and verify abnormal results in both women and neonates.\n\nBODY.CONCLUSION:\nPuerperal and neonatal infections had high incidence rates. Severe infections, such as myometritis and omphalitis, were unusual. In this study enemas didn't significantly modify puerperal or neonatal infection rates. A dramatically larger study would be necessary to determine the effects, if any, of enemas on specific outcomes including life-threatening complications. Enemas cause discomfort, increase workload for health carers and marginally increase the cost of health care. At this time there is no good evidence supporting the routine use of enemas. It seems unlikely that the effect of enemas on the incidence of specific outcomes is large enough to outweigh the inconvenience or adverse effects associated with the routine use of enemas during labour. The data from this study combined with additional RCTs may help better understand the particular effects of enemas, guide policies and elucidate the role of enemas during labour.\n\nBODY.COMPETING INTERESTS:\nThe author(s) declare that they have no competing interests.\n\nBODY.AUTHORS' CONTRIBUTIONS:\nLuis Gabriel Cuervo led the conception and design of the RCT, coordinated its implementation and development, data collection, analysis and interpretation of data, and leads the writing of this manuscript. María del Pilar Bernal trained research assistants, collected and input data, helped with the day to day management, participated in the writing of this article, and has reviewed its different versions. Natalia Mendoza contributed to the development of data capturing templates, trained research assistants, collected and input data, and participated in the original writing of this article and has reviewed its different versions.\n\nBODY.PRE-PUBLICATION HISTORY:\nThe pre-publication history for this paper can be accessed here: \n\nBODY.SUPPLEMENTARY MATERIAL:\nAdditional File 1: Participant flow and follow-up. Additional File 2: Characteristics of participants at admission. Maternal postpartum outcomes. Neonatal baseline data. Neonatal infectious outcomes. Maternal infectious outcomes. 
Grouped infectious outcomes.\n\n**Question:** Compared to no enema what was the result of 1 litre saline enema on Puerperal and neonatal infection rates ?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n
}
] |
[
{
"role": "assistant",
"content": "**Answer:** no significant difference\n"
}
] |
no significant difference
|
341
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Efficacy and Tolerability of Budesonide/Formoterol in One Hydrofluoroalkane Pressurized Metered-Dose Inhaler in Patients with Chronic Obstructive Pulmonary Disease\n\n ABSTRACT:\nBackground: Combination therapy with a long-acting bronchodilator and an inhaled corticosteroid (ICS) is recommended in patients with chronic obstructive pulmonary disease (COPD) who have frequent exacerbations. The efficacy and tolerability of the combination of budesonide/formoterol have been demonstrated in patients with COPD when administered via the dry powder inhaler (DPI) in a 1-year study and when administered via the hydrofluoroalkane (HFA) pressurized metered-dose inhaler (pMDI) in a 6-month study. Objective: This study assessed the long-term efficacy and tolerability of budesonide/formoterol HFA pMDI in patients with moderate to very severe COPD. Methods: This was a 12-month, randomized, double-blind, double-dummy, parallel-group, active- and placebo-controlled, multicentre study (NCT00206167) of 1964 patients aged ≥40 years with moderate to very severe COPD conducted from 2005 to 2007 at 237 sites in the US, Europe and Mexico. After 2 weeks of treatment based on previous therapy (ICSs, short-acting bronchodilators allowed), patients received one of the following treatments twice daily: budesonide/formoterol pMDI 160/4.5 μg × two inhalations (320/9 μg); budesonide/formoterol pMDI 80/4.5 μg × two inhalations (160/9 μg); formoterol DPI 4.5 μg × two inhalations (9 μg); or placebo. Main outcome measures: The co-primary efficacy variables were pre-dose forced expiratory volume in 1 second (FEV1) and 1-hour post-dose FEV1. Results: Budesonide/formoterol 320/9 μg demonstrated greater improvements in pre-dose FEV1 versus formoterol (p = 0.008), and both budesonide/formoterol doses demonstrated greater improvements in 1-hour post-dose FEV1 versus placebo (p < 0.001). The rate of COPD exacerbations was lower in both budesonide/formoterol groups compared with formoterol and placebo (p ≤ 0.004). Both budesonide/formoterol doses were more effective than placebo (p ≤ 0.006) for controlling dyspnoea and improving health status (St George's Respiratory Questionnaire). All treatments were generally well tolerated. The incidence of pneumonia was not different for active (3.4–4.0%) and placebo (5.0%) groups. Conclusions: Budesonide/formoterol pMDI (320/9 μg and 160/9 μg) improved pulmonary function and reduced symptoms and exacerbations over 1 year in patients with moderate to very severe COPD. Only budesonide/formoterol pMDI 320/9 μg demonstrated greater efficacy for both co-primary variables compared with formoterol DPI 9 μg. 
Both budesonide/formoterol pMDI dosages were well tolerated relative to formoterol and placebo.\n\nABSTRACT.ELECTRONIC SUPPLEMENTARY MATERIAL:\nSupplementary material is available for this article at 10.2165/00003495-200969050-00004 and is accessible for authorized users.\n\nBODY.BACKGROUND:\nIn patients with chronic obstructive pulmonary disease (COPD) and frequent exacerbations, the combination of a long-acting bronchodilator and an inhaled corticosteroid (ICS) is recommended.[1] Budesonide and formoterol delivered via a single dry powder inhaler (DPI; Symbicort® Turbuhaler®; AstraZeneca, Lund, Sweden) have been shown to improve pulmonary function[2,3] and reduce COPD exacerbation rates.[2,3] Fixed-combination budesonide/formoterol is available in the US in a hydrofluoroalkane pressurized metered-dose inhaler (pMDI; Symbicort® Inhalation Aerosol; AstraZeneca LP, Wilmington, DE, USA). While most patients with COPD can benefit from either a pMDI or DPI, the pMDI formulation allows patients who may have inadequate peak inspiratory flow for optimal delivery from a DPI to use the pMDI effectively. A recent 6-month study demonstrated the efficacy and tolerability of budesonide/formoterol pMDI in patients with moderate to very severe COPD.[4] Treatment with budesonide/formoterol pMDI resulted in significant improvements in pulmonary function, dyspnoea and health-related quality of life; however, the duration of the study was limiting with respect to the analysis of low-frequency events, such as COPD exacerbations, and precluded an assessment of safety outcomes with long-term exposure.[4] In this study, the long-term efficacy and tolerability of two dosage strengths of budesonide/formoterol pMDI were assessed over 12 months in patients with moderate to very severe COPD.\n\nBODY.PATIENTS AND METHODS.PATIENTS:\nThe inclusion criteria were designed to select a population with moderate to very severe COPD with previous exacerbations (i.e. appropriate candidates for combination ICS/long-acting β2-adrenoceptor agonist [LABA] therapy): age ≥40 years, diagnosis of symptomatic COPD for >2 years, ≥10 pack-year smoking history, prebronchodilator forced expiratory volume in 1 second (FEV1) of ≤50% of predicted normal and prebronchodilator FEV1/forced vital capacity (FVC) of <70%. Patients were to have a Modified Medical Research Council dyspnoea scale score of ≥2 and a history of at least one COPD exacerbation requiring oral corticosteroids or antibacterials within 1–12 months before the first study visit. Additional enrolment criteria were the same as those in a similar 6-month study by Tashkin et al.[4]\n\nBODY.PATIENTS AND METHODS.STUDY DESIGN:\nThis was a randomized, double-blind, double-dummy, parallel-group, active- and placebo-controlled, multicentre trial (ClinicalTrials.gov identifier: NCT00206167) conducted from 2005 to 2007 at 237 sites in the US, Europe and Mexico. The study protocol was approved by a local institutional review board and ethics committee, and written informed consent was obtained from patients. The study was designed to conform with the Declaration of Helsinki, and was consistent with the International Conference on Harmonisation and Good Clinical Practice and applicable regulatory requirements. After meeting eligibility criteria, patients entered a 2-week run-in period, during which they received ICS monotherapy if previously stable on ICS (alone or in combination) and ipratropium bromide at a fixed dose if previously receiving anticholinergics. 
Albuterol (salbutamol) was permitted for rescue use throughout the study. At randomization, all previous ICSs were discontinued and patients were randomized to one of four treatments (figure 1). Patients who were treated with ipratropium bromide during the run-in period were allowed to continue on it throughout the study period. Other allowed and disallowed concomitant medications were the same as those reported in the study by Tashkin et al.[4]\nFig. 1Study design. To maintain blinding, patients received both a pressurized metered-dose inhaler (pMDI) and a dry powder inhaler (DPI) containing either active treatment or double-dummy placebo (PL) as appropriate. This study used formoterol (FM) DPI (Oxis® Turbuhaler®, AstraZeneca, Lund, Sweden) as the FM comparator because FM is not available as a hydrofluoroalkane pMDI in the US. A previous study in asthma patients reported equivalent FM-related bronchodilatory effects when FM was administered in combination with budesonide (BUD) via pMDI or alone via DPI.[5] Patients were asked to return to the clinic for follow-up visits 3–8 at the end of months 1, 2, 4, 6, 9 and 12, and received a telephone call 4 weeks after the last clinic visit. bid = twice daily; R = randomization.\n\nBODY.PATIENTS AND METHODS.EFFICACY EVALUATIONS:\nThe co-primary efficacy variables, measured at all clinic visits, were pre-dose FEV1 and 1-hour post-dose FEV1. Additional pulmonary function variables included pre-dose and 1-hour post-dose FVC measured at all clinic visits, and morning and evening peak expiratory flow (PEF) recorded daily in patient diaries. In a subset of patients (n = 491 of 1964; 25%) who agreed to undergo spirometry testing, 12-hour serial spirometry was performed and pre-dose and 1-hour post-dose inspiratory capacity (IC) were collected at randomization and months 6 and 12. FEV1 was measured pre-dose and at 5, 15, 30, 60, 120, 180, 240, 360, 480, 600 and 720 minutes post-dose. Baseline-adjusted average 12-hour FEV1 was calculated as the area between the 12-hour post-dose FEV1-over-time curve and the baseline pre-dose FEV1, divided by observation time. Spirometry was performed according to American Thoracic Society guidelines.[6] Crapo-predicted normals for FEV1 were used.[7] Secondary efficacy endpoints included COPD exacerbations, dyspnoea (separate and composite scores) and health status, all assessed as previously described.[4] A COPD exacerbation was defined as worsening of COPD requiring an oral corticosteroid or hospitalization.[4] Dyspnoea was assessed daily before the evening dose of study medication using the Breathlessness Diary, a validated single-item component of the Breathlessness Cough and Sputum Scale (BCSS).[8] Health status was assessed at months 1, 2, 6 and 12 using the St George's Respiratory Questionnaire (SGRQ).[9,10] Sleep score, percentage of awakening-free nights (sleep score of 0) and study rescue medication use were assessed as previously described by Tashkin et al.[4]\n\nBODY.PATIENTS AND METHODS.SAFETY EVALUATIONS:\nSafety was assessed by adverse event (AE) reporting. Pneumonia events were reported by physicians based on the Medical Dictionary for Regulatory Activities (version 10.0) pneumonia-related preferred terms (pneumonia, bronchopneumonia, lobar pneumonia or pneumonia staphylococcal). 
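As a minimal numeric sketch of the baseline-adjusted average 12-hour FEV1 defined in the efficacy methods above (the area between the post-dose FEV1-over-time curve and the pre-dose baseline, divided by the observation time): the FEV1 values below are made up, and the handling of the start of the observation window is one plausible choice, since the paper does not spell it out.

```python
import numpy as np

# Post-dose sampling times (minutes) from the serial-spirometry schedule above.
t_min = np.array([5, 15, 30, 60, 120, 180, 240, 360, 480, 600, 720], dtype=float)

# Hypothetical FEV1 measurements (litres) for one patient; not study data.
fev1_post = np.array([1.32, 1.36, 1.40, 1.43, 1.45, 1.44, 1.42, 1.38, 1.35, 1.33, 1.31])
fev1_baseline = 1.20  # pre-dose FEV1 (L)

t_h = t_min / 60.0
above_baseline = fev1_post - fev1_baseline

# Trapezoidal area between the post-dose curve and the pre-dose baseline,
# divided by the observation time, as described in the methods.
auc = np.sum(np.diff(t_h) * (above_baseline[:-1] + above_baseline[1:]) / 2.0)
baseline_adjusted_avg_fev1 = auc / (t_h[-1] - t_h[0])
print(f"Baseline-adjusted average 12-hour FEV1: {baseline_adjusted_avg_fev1:.3f} L")
```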
Vital signs and 12-lead ECGs were evaluated as previously described by Tashkin et al.[4] Subsets of patients were assessed for 24-hour urinary cortisol (n = 179), 24-hour Holter monitoring (n = 520), bone mineral density (BMD) at the hip and spine regions (n = 326), and ophthalmological assessments (n = 461), including intraocular pressure and progression of lenticular opacities. In all patients, blood and urine samples were collected, and comprehensive physical examinations were carried out at the time of screening and at the end of month 12. Blood and urine samples were analysed by a central laboratory (Quest Diagnostics Clinical Laboratories, Van Nuys, CA, USA). Vital signs, including heart rate and blood pressure, were measured at all study visits. Samples to assess 24-hour urinary cortisol levels were collected in a subset of patients at or before randomization and within 1 week before the 6- and 12-month visits, and analysed by Quest Diagnostics. 24-Hour Holter monitoring was performed at screening and months 1 and 4, BMD assessments (two sets of dual energy x-ray absorptiometry scans in the hip and spine regions) were performed at screening and the end of month 12, and ophthalmology assessments (intraocular pressure and lenticular opacities) were performed at screening and the end of months 6 and 12.\n\nBODY.PATIENTS AND METHODS.STATISTICAL ANALYSES:\nThe efficacy analysis set (i.e. intent-to-treat population) included all randomized patients who received at least one dose of randomized study medication and contributed sufficient data for at least one co-primary or secondary efficacy endpoint to be calculated during the randomized treatment period. The safety analysis population included all randomized patients who received at least one dose of randomized study medication and from whom any post-randomization data were available. For the subsets of patients who underwent serial spirometry, 24-hour urinary cortisol, BMD, 24-hour Holter monitoring and ophthalmological analyses, analysis sets were defined as those patients who received at least one dose of randomized study medication and had baseline and on-treatment values for the variable being assessed. Patients who discontinued prematurely completed final visit (month 12) assessments at the time of discontinuation, followed by the 4-week follow-up. A sample size of approximately 400 patients per treatment group was calculated to allow 90% power to detect a reduction in the number of COPD exacerbations of approximately 30%, adjusting for overdispersion of 2.3. This sample size ensured >95% power to detect a difference of 0.10L in FEV1, given an estimated standard deviation of 0.3 L. All tests were two-sided, with p ≤ 0.05 considered significant. The prespecified primary comparators were formoterol DPI for pre-dose FEV1 to demonstrate the contribution of budesonide and placebo for 1-hour post-dose FEV1. To address issues of multiplicity relating to multiple-dose comparisons and multiple variables of interest, a fixed-sequence testing procedure was adopted. The primary comparisons were prespecified as budesonide/formoterol pMDI 320/9 μg compared with (i) placebo for pre-dose FEV1 and 1-hour post-dose FEV1; and (ii) formoterol DPI 9 μg for pre-dose FEV1. If significant differences were obtained, comparisons continued with budesonide/formoterol pMDI 320/9 μg compared with placebo for (i) dyspnoea; (ii) SGRQ total score; and (iii) number of exacerbations. 
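The FEV1 power statement above can be checked directly: with roughly 400 patients per group, a 0.10 L difference and a 0.3 L standard deviation, a two-sided two-sample comparison at alpha = 0.05 (the design implied by the methods) gives power of about 99.7%, consistent with the ">95%" claim. A minimal check with statsmodels follows; the exacerbation-based calculation is not reproduced here because the assumed baseline rate is not reported.

```python
from statsmodels.stats.power import TTestIndPower

effect_size = 0.10 / 0.3  # 0.10 L difference / 0.3 L SD (standardized effect ~0.33)
power = TTestIndPower().power(effect_size=effect_size, nobs1=400, ratio=1.0, alpha=0.05)
print(f"Power with ~400 patients per group: {power:.3f}")  # about 0.997, i.e. >95%
```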
If significant differences were obtained for each of these key secondary variables, the lower dose of budesonide/formoterol pMDI was tested on the co-primary variables, as previously described, and, if significant differences were obtained, testing continued with number of exacerbations, dyspnoea and SGRQ total score compared with placebo. For all secondary efficacy variables, the primary comparison was budesonide/formoterol pMDI versus placebo. Primary and secondary variables were assessed as the change from baseline to the average over the randomized treatment period except for average 12-hour FEV1, mean FEV1 at 12 hours and SGRQ at end of treatment. Changes from baseline in the co-primary efficacy variables were analysed via analysis of co-variance (ANCOVA), adjusting for treatment, country and baseline value. The following three key secondary efficacy variables were identified: (i) dyspnoea; (ii) SGRQ total score; and (iii) exacerbations. Secondary variables were analysed similar to the co-primary efficacy variables; however, exacerbation rate (number/patient-treatment year) was analysed using a Poisson regression model, adjusted for country and differential randomization time, and time to first COPD exacerbation was described via Kaplan-Meier plot and analysed using the log-rank test. The number and percentage of patients experiencing clinically meaningful changes (based on a prespecified minimal important difference) in dyspnoea were analysed using the Cochran-Mantel-Haenszel test, adjusting for country. Geometric mean 24-hour urinary cortisol levels at end of treatment were compared between treatment groups using a multiplicative ANCOVA model. Mean changes from baseline to the average during the randomized treatment period in 12-lead ECG, and Holter variables and ophthalmology variables were analysed using a model similar to that used for the co-primary variables. BMD variables (natural logarithm of the analysis timepoint minus the natural logarithm of the respective baseline value for hip and spine) were analysed using an ANCOVA model adjusting for country, treatment and natural logarithm of the baseline value. Other safety data were summarized using descriptive statistics. No formal hypothesis testing of the safety data was performed.\n\nBODY.RESULTS.PATIENTS:\nOf 1964 randomized patients, 1355 completed the study (figure 2). Discontinuation was greater with placebo versus budesonide/formoterol and formoterol driven by withdrawal of consent. Time to discontinuation was significantly (p ≤ 0.004) prolonged in both budesonide/formoterol groups versus placebo. AE was the most common reason for study discontinuation, with a similar incidence across treatments (12.1–13.5%). Approximately 60% of patients had documented use of an ICS, either alone or in combination before entering the study. The percentage of patients who discontinued from the study in the formoterol and placebo groups was slightly higher in patients previously treated with ICS (34.9% and 38.0%, respectively) compared with those not previously receiving ICS before the study (26.8% and 33.9%, respectively). In contrast, patients in the budesonide/formoterol 320/9-μg group previously receiving ICS had a lower discontinuation rate compared with those not previously receiving ICS (23.1% vs 34.1%, respectively). In the budesonide/formoterol 160/9-μg group, the percentage of patients who discontinued the study was similar for those who were previously receiving an ICS (29.5%) and those who were not (28.2%).\nFig. 
2Patient disposition. AE = adverse event; bid = twice daily; BUD = budesonide; DPI = dry powder inhaler; FM = formoterol; med = medication; PL = placebo; pMDI = pressurized metered-dose inhaler. Most demographic and disease characteristics at baseline were similar across treatments (table I). However, in the serial spirometry subset (n = 491), mean percentage reversibility was greater in the budesonide/formoterol 320/9 μg (19.9%), budesonide/formoterol 160/9 μg (20.7%) and placebo (19.5%) groups versus formoterol (16.9%). COPD severity[1] was moderate in 17.8%, severe in 60.4% and very severe in 21.5% of patients. Common co-morbid conditions included hypertension (41.6%), lipid profile abnormalities (22.0%), cardiac disease (17.7%), diabetes mellitus (11.0%), osteoporosis (10.5%), cataracts (5.2%), atrial fibrillation/arrhythmia (4.3%) and congestive cardiac failure (2.9%).\nTable IPatient demographic and baseline clinical characteristics of randomized patients\n\nBODY.RESULTS.EFFICACY EVALUATIONS.PULMONARY FUNCTION.CO-PRIMARY ASSESSMENTS:\nImprovements in pre-dose FEV1 were significantly greater for budesonide/formoterol 320/9 μg compared with formoterol (primary comparison; p = 0.008) and for both budesonide/formoterol dosages compared with placebo (p < 0.001) [figure 3a]. Although improvements in 1-hour post-dose FEV1 were significantly greater for both budesonide/formoterol dosages compared with placebo (primary comparison; p < 0.001), budesonide/formoterol 320/9 μg also resulted in significantly greater improvements compared with formoterol (p = 0.023) [figure 3b]. Improvements from baseline were apparent at the first assessment (pre-dose FEV1 at end of month 1; 1-hour post-dose FEV1 at day of randomization) and overall maintained over the 12-month treatment period for both budesonide/formoterol dosages (figure 3a and b).\nFig. 3Co-primary efficacy endpoints. Least squares mean change from baseline by study visit over the randomized treatment period in (a) pre-dose forced expiratory volume in 1 second (FEV1) and (b) 1-hour post-dose FEV1. BUD = budesonide; DPI = dry powder inhaler; FM = formoterol; pMDI = pressurized metered-dose inhaler. * p < 0.001 vs placebo; † p ≤ 0.023 vs FM.\n\nBODY.RESULTS.EFFICACY EVALUATIONS.PULMONARY FUNCTION.SECONDARY ASSESSMENTS:\nMorning and evening PEF improved significantly for all active treatments compared with placebo (p ≤ 0.012) and for both budesonide/formoterol dosages compared with formoterol (p ≤ 0.017) [table II]. Results for pre-dose and 1-hour post-dose FVC are presented in the online supplement (see figures S1a and b in the supplementary material ['ArticlePlus'] at http://links.adisonline.com/DGZ/A5).\nTable IIMean (SD) changes from baseline in additional pulmonary function assessments In the serial spirometry subset, a ≥15% improvement in FEV1 was observed at 5 minutes after dose administration (first assessment) with all active treatments on the day of randomization (budesonide/formoterol 320/9 μg, 17.7%; budesonide/formoterol 160/9 μg, 20.3%; formoterol 9 μg, 16.5%) and at end of treatment (budesonide/formoterol 320/9 μg, 21.8%; budesonide/formoterol 160/9 μg, 22.4%; formoterol 9 μg, 15.0%) [figures 4a, b and c]. There was no evidence of a diminished effect at end of treatment over the 12-hour period in the budesonide/formoterol groups. 
Mean FEV1 at 12 hours and baseline-adjusted average 12-hour FEV1 were significantly improved with both budesonide/formoterol dosages compared with placebo on the day of randomization and at end of treatment (p ≤ 0.002) [table II]. Although both budesonide/formoterol dosages demonstrated significant improvements in mean FEV1 at 12 hours and in baseline-adjusted average 12-hour FEV1 compared with formoterol on the day of randomization (p ≤ 0.029), only budesonide/formoterol 320/9 μg demonstrated this effect at the end of treatment (p ≤ 0.004) [table II].\nFig. 4Mean percentage change from baseline in forced expiratory volume in 1 second (FEV1) over 12 hours at randomization and end of treatment (EOT) for (a) budesonide (BUD)/formoterol (FM) 320/9 μg twice daily (bid) vs placebo, (b) BUD/FM 160/9 μg bid vs placebo and (c) FM 9 μg bid vs placebo. DOR = day of randomization; DPI = dry powder inhaler; pMDI = pressurized metered-dose inhaler. No significant differences in pre-dose IC were observed among the treatment groups (figure 5a). Improvements from baseline to the average over the randomized treatment period in 1-hour post-dose IC were significantly greater with both budesonide/formoterol dosages compared with placebo (p < 0.001) and formoterol (p ≤ 0.018) [figure 5b]. Both dosages of budesonide/formoterol resulted in a 350 mL (≈25%) improvement from baseline to the average over the randomized treatment period in 1-hour post-dose IC, and formoterol resulted in a 210 mL (17%) improvement in 1-hour post-dose IC.\nFig. 5Least squares mean change from baseline by study visit over the randomized treatment period in (a) pre-dose inspiratory capacity (IC) and (b) 1-hour post-dose IC. BUD = budesonide; DPI = dry powder inhaler; FM = formoterol; pMDI = pressurized metered-dose inhaler. * p < 0.001 vs placebo; † p = 0.01 vs FM; ‡ p < 0.05 vs FM; § p < 0.01 vs placebo.\n\nBODY.RESULTS.EFFICACY EVALUATIONS.CHRONIC OBSTRUCTIVE PULMONARY DISEASE (COPD) EXACERBATIONS:\nTime to first COPD exacerbation was significantly prolonged with both budesonide/formoterol dosages compared with placebo (p ≤ 0.004) and with budesonide/formoterol 320/9 μg compared with formoterol (p = 0.026) [figure 6]. In addition, significant reductions in the overall number of exacerbations per patient-treatment year were observed with budesonide/formoterol 320/9 μg and 160/9 μg versus placebo (37% and 41%, respectively; p < 0.001) and formoterol (25% and 29%, respectively; p ≤ 0.004) [see figure S2 in the supplementary material]. These reductions were driven by exacerbations treated with oral corticosteroids, the rate of which was reduced with budesonide/formoterol 320/9 μg and 160/9 μg versus placebo (37.5% and 42.9%, respectively) and formoterol (24.1% and 30.6%, respectively) [p ≤ 0.006]. In addition, the percentage of patients in the budesonide/formoterol 320/9 μg, budesonide/formoterol 160/9 μg, formoterol and placebo groups who experienced an exacerbation was greater in those receiving ICS therapy either alone or in combination before the study (33.7%, 38.2%, 39.9% and 40.1%, respectively) compared with those not previously receiving ICS (25.8%, 24.9%, 29.4% and 32.8%, respectively).\nFig. 6Kaplan-Meier probability curve for the time to first chronic obstructive pulmonary disease exacerbation during randomized treatment. BUD = budesonide; DPI = dry powder inhaler; FM = formoterol; pMDI = pressurized metered-dose inhaler. 
* p ≤ 0.004 vs placebo; † p = 0.026 vs FM.\n\nBODY.RESULTS.EFFICACY EVALUATIONS.HEALTH STATUS AND COPD SYMPTOMS:\nImprovements in SGRQ total score were significantly greater for both budesonide/formoterol dosages compared with placebo (p ≤ 0.006) and for budesonide/formoterol 160/9 μg compared with formoterol (p = 0.006; table III). The improvements from baseline in SGRQ total score in the active treatment groups were not clinically meaningful versus placebo (i.e. decrease of ≥4 points). Significantly greater improvements in COPD symptom variables were observed for both budesonide/formoterol dosages versus placebo for all variables (BCSS, dyspnoea score, cough score, sputum score, sleep score, percentage of awakening-free nights and total daily rescue medication use) [p ≤ 0.003]. Compared with formoterol, mean improvements were significantly greater with budesonide/formoterol 320/9 μg (p ≤ 0.038) for all COPD symptom variables except sputum score and percentage of awakening-free nights, and with budesonide/formoterol 160/9 μg (p ≤ 0.047) for all COPD symptom variables except BCSS, dyspnoea and sputum score (table IV).\nTable IIIMean changes (SD) in SGRQ total and domain scores from baselinea to end of treatmentbTable IVMean (SD) changes in chronic obstructive pulmonary disease (COPD) symptom variables from baselinea to the average over the randomized treatment period Improvements (i.e. reductions) in dyspnoea scores were significantly greater for all active treatments versus placebo (p ≤ 0.003) and for budesonide/formoterol 320/9 μg versus formoterol (p = 0.032; table IV). All active treatment arms demonstrated ≥0.2 points (minimal important difference) change from baseline, but only budesonide/formoterol 320/9 μg had an increase of ≥0.2 points over placebo. A significantly (p < 0.001) greater percentage of patients in both budesonide/formoterol groups and in the formoterol group experienced clinically meaningful improvements in dyspnoea versus placebo (see figure S3 in the supplementary material).\n\nBODY.RESULTS.SAFETY EVALUATIONS:\nMean treatment exposure was lowest for placebo (270 days) compared with budesonide/formoterol 320/9 μg (305 days), budesonide/formoterol 160/9 μg (299 days) and formoterol (289 days). The most commonly reported AE (irrespective of causality) was COPD, which had a lower incidence with budesonide/formoterol 320/9 μg versus placebo and a slightly higher incidence with budesonide/formoterol 160/9 μg versus placebo (table V). AEs considered by the investigator to be related to study medication were generally similar among treatment groups with the most commonly reported being oral candidiasis, COPD and dysphonia (see table SI in the supplementary material).\nTable VOverall adverse events (AEs) [irrespective of relationship to study medication] reported by ≥3% of patients The total incidence of pneumonia-related AEs (pneumonia, bronchopneumonia, lobar pneumonia and pneumonia staphylococcal) was similar for budesonide/formoterol 320/9 μg and 160/9 μg (4.0% and 3.4%, respectively) compared with formoterol (3.4%) and placebo (5.0%) [see table SII in the supplementary material]. For potential lung infections other than pneumonia, the incidence was slightly higher in the active treatment groups versus placebo, which was driven largely by bronchitis (see table SII in the supplementary material). 
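Time to first COPD exacerbation above is summarised with a Kaplan-Meier curve (figure 6) and, per the article's statistical-analyses section, compared using a log-rank test. The sketch below reproduces that style of analysis on small made-up data; the `lifelines` package and all values are assumptions of this illustration, not the software or data used in the trial.

```python
# Illustration only: toy durations, not trial data. Assumes the `lifelines` package.
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical days to first exacerbation (1 = exacerbation observed, 0 = censored at study end).
days_combo    = [300, 365, 365, 210, 365, 150, 365, 365]
event_combo   = [1,   0,   0,   1,   0,   1,   0,   0]
days_placebo  = [90, 365, 120, 60, 365, 180, 30, 365]
event_placebo = [1,  0,   1,   1,  0,   1,   1,  0]

kmf = KaplanMeierFitter()
kmf.fit(days_combo, event_observed=event_combo, label="combination (toy)")
print(kmf.median_survival_time_)  # median time to first exacerbation, if reached

result = logrank_test(days_combo, days_placebo,
                      event_observed_A=event_combo, event_observed_B=event_placebo)
print(result.p_value)  # two-sided log-rank p-value on the toy data
```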
AEs typically or potentially associated with local and systemic effects of inhaled corticosteroids (local effects: aphonia, dysphonia, oral candidiasis and candidiasis; systemic effects: weight gain, adrenal suppression, ocular effects, skin effects, psychiatric disorder, diabetes control, thirst, taste effects and bone effects) were more frequent with budesonide/formoterol 320/9 μg (10.3% and 4.0%, respectively) and budesonide/formoterol 160/9 μg (5.7% and 4.5%, respectively) than formoterol (0.6% and 2.6%, respectively) and placebo (2.5% and 2.7%, respectively). The overall incidence of LABA class effects (i.e. tremor, palpitation, tachycardia, potassium changes, glucose changes, headache, agitation, anxiety, sleep effects and muscle cramp) was low, but higher among active treatment groups (budesonide/formoterol 320/9 μg [9.5%], budesonide/formoterol 160/9 μg [8.9%] and formoterol [6.5%]) versus placebo (4.8%). The incidence of cardiac-related AEs was higher for all active treatments (10.5–11.3%) versus placebo (6.9%). The most common cardiac-related AEs were hypertension and angina pectoris (budesonide/formoterol 320/9 μg [2.4% and 1.2%, respectively], budesonide/formoterol 160/9 μg [1.6% and 0.8%, respectively], formoterol [2.6% and 1.8%, respectively] and placebo [2.1% and 1.0%, respectively]). Although the number of atrial fibrillation AEs was low, all cases occurred within the active treatment groups. Discontinuations due to AEs (DAEs) occurred in 238 patients, with a similar incidence observed across treatment groups (range 11.3–12.5%) [see table SIII in the supplementary material]. The most common DAE was COPD, which was highest in the formoterol group (7.3%) and lowest in the budesonide/formoterol 320/9 μg group (4.0%); the incidence was similar in the budesonide/formoterol 160/9 μg (6.1%) and placebo (6.0%) groups (see table SIII in the supplementary material). The overall incidence of non-fatal serious adverse events (SAEs) was highest with formoterol (17.8%) compared with the other treatment groups and higher with budesonide/formoterol 320/9 μg (15.6%) and budesonide/formoterol 160/9 μg (13.6%) versus placebo (12.1%) [see table SIV in the supplementary material]. The most common non-fatal SAEs (occurring in five or more patients [all treatment groups combined]) were COPD (6.8%), pneumonia (1.5%), atrial fibrillation (0.5%), angina pectoris (0.3%), bronchitis (0.3%) and coronary artery disease (0.3%). These non-fatal SAEs occurred at a similar incidence in all treatment groups, except for COPD, which was slightly more common in the active treatment groups (budesonide/formoterol 320/9 μg [7.1%], budesonide/formoterol 160/9 μg [6.7%] and formoterol [7.9%]) versus placebo (5.6%). Fifteen patients died during the randomized treatment period with no imbalance across treatments: three in the budesonide/formoterol 320/9 μg group; six in the budesonide/formoterol 160/9 μg group; two in the formoterol group; and four in the placebo group. Fifteen patients died after the randomized treatment period (≥2 days after stopping study treatment) with no imbalance across treatments: five in the budesonide/formoterol 320/9 μg group; two in the budesonide/formoterol 160/9 μg group; four in the formoterol group; and four in the placebo group. None of the deaths were considered drug related by the investigator. Clinically significant changes in vital signs, 24-hour urinary cortisol, and 12-lead ECGs, QT interval and Holter assessments were rare. 
The geometric mean values of 24-hour urinary cortisol at 6 months and end of treatment were lower in both budesonide/formoterol dosage groups compared with formoterol and placebo (see table SV in the supplementary material). Significant differences in 24-hour urinary cortisol were observed between the budesonide/formoterol 320/9 μg and placebo groups (p = 0.035) at month 6 and between the budesonide/formoterol 320/9 μg and formoterol groups (p = 0.044) at end of treatment. Corrected QT (QTc) intervals ≥450 msec were experienced by similar percentages of patients across all treatment groups, whereas few patients experienced QTc intervals ≥500 msec or a QTc change ≥60 msec (see table SVI in the supplementary material). Data from ECG and Holter recordings showed a similar incidence of new onset atrial fibrillation in the active and placebo groups. BMD was stable over the 1-year study period in all treatment groups, with small but statistically significant differences in changes from baseline observed for budesonide/formoterol 320/9 μg compared with all other treatments for total lumbar spine BMD (p ≤ 0.037) and for budesonide/formoterol 320/9 μg versus formoterol for total hip BMD (p = 0.012) [see table SVII in the supplementary material]. The differences between the treatment groups in changes from baseline in BMD were of unclear clinical relevance, as the geometric least squares mean ratios for these treatment group comparisons were close to 1 (range 0.98–0.99). Minor increases in lenticular opacities and intraocular pressure were noted across all treatment groups in the ophthalmological analysis subset; the magnitude of these changes was similar among the treatment groups (see table SVIII in the supplementary material). Small but significant differences were observed for the change from baseline in posterior subcapsular score between the budesonide/formoterol groups (p = 0.022). Clinically significant changes in ophthalmological assessment were infrequent.\n\nBODY.DISCUSSION:\nIn this study, both budesonide/formoterol dosage strengths resulted in significant improvements from baseline over the randomized treatment period in pulmonary function, health status and COPD symptoms, as measured by dyspnoea, cough and sputum production, and a clinically significant reduction in COPD exacerbation rate compared with placebo. In addition, both budesonide/formoterol pMDI dosages were well tolerated relative to formoterol alone and placebo. The results of this 12-month study confirm those from the earlier related 6-month study by Tashkin et al.[4] and are consistent with those from studies of budesonide/formoterol DPI.[2,3] Taken together, the results demonstrate that this ICS/LABA combination provides benefits beyond formoterol alone in the treatment of COPD. The contribution of budesonide to improvements from baseline in pulmonary function and COPD symptoms was demonstrated by the greater efficacy of budesonide/formoterol 320/9 μg compared with formoterol alone for the co-primary variables of pre-dose FEV1 and 1-hour post-dose FEV1, and for dyspnoea and total BCSS scores, supporting use of the higher-dose combination for COPD symptoms. Both budesonide/formoterol dosage strengths demonstrated greater improvements from baseline compared with formoterol alone in morning and evening PEF, 12-hour FEV1, symptom scores and rescue medication use. 
In addition, the 25–29% reduction in exacerbation rate observed with both budesonide/formoterol dosage strengths beyond the substantial reduction achieved with formoterol alone further demonstrates the important contribution of budesonide to the combination product. The magnitude of the reduction in exacerbation rates reported in this study is similar to that reported previously for fixed-dose regimens of budesonide/formoterol 320/9 μg twice daily[2–4] and fluticasone propionate/salmeterol[11–13] relative to LABA alone. A key difference between this 12-month study and the 6-month study of similar design reported by Tashkin et al.[4] is that this study was powered to show a difference in exacerbations, while the 6-month study was not. This study demonstrated a significant reduction in exacerbation rates in patients treated with budesonide/formoterol 320/9 μg and 160/9 μg compared with those receiving formoterol or placebo. The fact that a greater percentage of patients in the formoterol and placebo groups who previously received ICS therapy, either alone or in combination, experienced exacerbations compared with those who did not previously receive ICS therapy may result in a skewing of the data; however, a similar response was observed in combination budesonide/formoterol patients with a greater percentage of patients who previously received ICS experiencing exacerbations compared with patients who were not previously treated with ICS therapy. Similarly, a greater percentage of patients who previously received ICS therapy discontinued in the formoterol and placebo groups than those who did not previously receive ICS therapy; however, this trend was reversed in the combination budesonide/formoterol groups. The long-term nature of this study allowed for a more thorough assessment of tolerability compared with the 6-month study.[4] No differences in pneumonia incidence were observed among treatment groups in this study or in the 6-month study, despite a shorter exposure time in the placebo group.[4] These results contrast with previous COPD studies that showed an increased incidence of pneumonia with fluticasone propionate/salmeterol treatment.[11–14] However, in this study and the 6-month study,[4] the incidence of potential lung infections other than pneumonia was slightly higher in all active treatments compared with placebo; these differences were largely driven by an increased incidence of bronchitis. The incidence of SAEs, including COPD SAEs, was slightly higher in both budesonide/formoterol groups compared with placebo. This finding may be attributed to the earlier discontinuation in the placebo group, which resulted in a shorter study drug exposure (35 and 29 fewer days vs budesonide/formoterol 320/9 μg and 160/9 μg, respectively). Although the overall incidence of atrial fibrillation, a known LABA class effect, was small (1.1%), an imbalance was noted between the formoterol-containing arms and placebo. However, no differences in new-onset atrial fibrillation were apparent among groups based on 24-hour Holter monitoring or ECG data collected at repeated visits during the randomized treatment period. 
Overall, the safety profile of budesonide/formoterol pMDI in this 12-month study was similar to that reported in the 6-month study by Tashkin et al.,[4] suggesting no increase in safety concerns with a longer duration of treatment.\n\nBODY.CONCLUSIONS:\nBoth dosage strengths of budesonide/formoterol pMDI (320/9 μg and 160/9 μg) were effective and well tolerated over 1 year of treatment in this population of patients with moderate to very severe COPD. In addition, budesonide/formoterol pMDI 320/9 μg demonstrated greater efficacy compared with formoterol for pre-dose FEV1, suggesting that the higher dosage strength containing twice the daily dose of budesonide is appropriate in this patient population.\n\nBODY.ELECTRONIC SUPPLEMENTARY MATERIAL:\nSupplementary material, approximately 247 KB.\n\n**Question:** Compared to Formoterol DPI 9 μg what was the result of Budesonide/formoterol pMDI 320/9 μg on 1-hour post-dose inspiratory capacity mean improvements from baseline to the average over the randomized treatment period?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly increased\n"
}
] |
significantly increase
|
403
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Incisional hernia after upper abdominal surgery: a randomised controlled trial of midline versus transverse incision\n\n ABSTRACT.OBJECTIVES:\nTo determine whether a transverse incision is an alternative to a midline incision in terms of incisional hernia incidence, surgical site infection, postoperative pain, hospital stay and cosmetics in cholecystectomy.\n\nABSTRACT.SUMMARY BACKGROUND DATA:\nIncisional hernias after midline incision are commonly underestimated but probably complicate between 2 and 20% of all abdominal wall closures. The midline incision is the preferred incision for surgery of the upper abdomen despite evidence that alternatives, such as the lateral paramedian and transverse incision, exist and might reduce the rate of incisional hernia. A RCT was preformed in the pre-laparoscopic cholecystectomy era the data of which were never published.\n\nABSTRACT.METHODS:\nOne hundred and fifty female patients were randomly allocated to cholecystectomy through midline or transverse incision. Early complications, the duration to discharge and the in-hospital use of analgesics was noted. Patients returned to the surgical outpatient clinic for evaluation of the cosmetic results of the scar and to evaluate possible complications such as fistula, wound dehiscence and incisional hernia after a minimum of 12 months follow-up.\n\nABSTRACT.RESULTS:\nTwo percent (1/60) of patients that had undergone the procedure through a transverse incision presented with an incisional hernia as opposed to 14% (9/63) of patients from the midline incision group (P = 0.017). Transverse incisions were found to be significantly shorter than midline incisions and associated with more pleasing appearance. More patients having undergone a midline incision, reported pain on day one, two and three postoperatively than patients from the transverse group. The use of analgesics did not differ between the two groups.\n\nABSTRACT.CONCLUSIONS:\nIn light of our results a transverse incision should, if possible, be considered as the preferred incision in acute and elective surgery of the upper abdomen when laparoscopic surgery is not an option.\n\nBODY.INTRODUCTION:\nThe rate of incisional hernia after midline incision is commonly underestimated but probably lies between 2 and 20% [1–5]. Thus, incisional hernia is a major postoperative problem. The treatment of incisional hernia is complicated by high rates of recurrences. Recently, in a randomised controlled trial published by Burger et al. [6], midline incisional hernia repair has been shown to be associated with a 10-year cumulative recurrence rate of 63 and 32% for suture and mesh repair, respectively. The midline incision is the preferred incision for surgery of the upper abdomen, despite evidence that alternatives, such as the lateral paramedian and transverse incision, exist and might reduce the rate of incisional hernia [7]. 
Various approaches to opening the abdomen have been advocated over time. The choice for a certain incision is dependent on the exposure necessary for the desired procedure to succeed. A midline incision, be it supraumbilical, infraumbilical or both, is an approach especially suited for emergency and exploratory surgery because of the quick and generous exposure that can be achieved within a few minutes [8, 9]. The avascular nature of the linea alba minimises blood loss during this procedure. A supraumbilical transverse incision may be utilised in case exposure of the upper abdomen is desired. During this incision, the damage inflicted to the segmental arteries and nerves is previously described as being minimal [10]. Previously, only one randomised controlled trial, comparing transverse and true midline incisions, has been published specifically addressing incisional hernia incidence [11]. To determine whether the use of a transverse incision is an alternative to a midline incision for open cholecystectomy in terms of incisional hernia incidence, surgical site infection, postoperative pain and hospital stay, this randomised controlled trial was performed. This trial was conducted in an era when laparoscopic cholecystectomy was not yet available. The possibility of low incisional hernia rates after transverse incisions and the fact that little is known about potential advantages incited us to publish the relevant results of this randomised controlled trial which has been performed in the past and has only been reported in a Dutch thesis by one of the authors (H.L.). The primary endpoint of this study was the incisional hernia incidence after 12 months of follow-up. Secondary endpoints included pain and cosmetic appearance.\n\nBODY.METHODS.PROTOCOL:\nSome 150 consecutive female patients were randomly assigned to a midline or transverse incision as an approach for elective cholecystectomy or combined cholecystectomy and cholangiography (with or without consecutive choledochotomy) (75 and 75 patients, respectively). Emergency procedures were excluded from participation. The sample size is based on an incisional hernia rate reduction from 20 to 6% at a power of 80% and an error rate of 5%. Obtaining informed consent was conducted in accordance with the ethical standards of the Helsinki Declaration of 1975. The investigation reported was performed with informed consent from all of the patients and followed the guidelines for experimental investigation with human subjects and was approved by the medical ethics committee. An independent statistician prepared closed, tamper-proof envelopes containing the random allocation (Fig. 1). Patients were randomised for one of the procedures in theatre through the opening of the envelopes.Fig. 1Flow chart of patient inclusion and follow-up Patient-related factors that were recorded were age, body mass and length and date of operation. Operation-related factors that were recorded were the exact nature of the operation, length of the incision, the thickness of the subcutaneous fat, surgeon performing the procedure, as well as the duration of the operation (skin-to-skin time). In the immediate postoperative period, the use, dose and type of analgesics was recorded and a pain score was administered. The use of analgesics (morphine 7.5 mg intra-muscular injection, 4 h minimum interval between consecutive injections) was monitored for 48 h after surgery; the pain score was administered for the first 6 days after surgery. 
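The protocol above sizes the trial on a reduction in incisional hernia rate from 20% to 6% with 80% power and a 5% error rate, but does not state which formula was used. The sketch below shows one common two-proportion power calculation (arcsine effect size via statsmodels) as an editor's illustration only; its answer need not match the 75 patients per arm actually randomized.

```python
# Illustration only: one common two-proportion sample-size calculation; the trial's exact method is not stated.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

p_midline, p_transverse = 0.20, 0.06                      # hernia rates assumed in the protocol
effect = proportion_effectsize(p_midline, p_transverse)   # Cohen's h (arcsine transform)

n_per_group = NormalIndPower().solve_power(effect_size=effect,
                                           alpha=0.05,
                                           power=0.80,
                                           alternative="two-sided")
print(round(n_per_group))  # patients per arm under this particular approximation
```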
In patients assigned to surgery through a midline incision, the skin was incised from just below the xyphoid process to just above the umbilicus. The abdominal wall was opened in the midline by incising the linea alba. A Collin type (two-bladed) self-retaining retractor was used to maintain exposure. The abdominal wall was closed in one layer using single polygalactin 910 sutures (Vicryl; Ethicon, Amersfoort, The Netherlands). The skin was consequently closed using running monofilament nylon sutures (Ethilon; Ethicon, Amersfoort, The Netherlands). Patients randomised for a transverse incision received a right-sided unilateral transverse incision between 3 and 4 cm below the costal margin. The rectus muscle was incised. The fibres of the external and internal obliques and the transverse muscles were separated in the direction of their course. Exposure was achieved through the use of a manually held single-bladed retractor. Closure of the abdominal wall was achieved by closure of the peritoneum and the posterior rectus fascia using a continuous, polygalactin 910 suture (Vicryl; Ethicon, Amersfoort, The Netherlands). The anterior rectus sheath and the fascia of the internal and external transverses were closed using simple interrupted polygalactin 910 sutures (Vicryl; Ethicon, Amersfoort, The Netherlands). Towards the end of both procedures, a Redon low-vacuum drain catheter was placed, which was guided outside the abdominal cavity approximately 5 cm from the incision. The skin was consequently closed using continuous monofilament nylon suture (Ethilon; Ethicon, Amersfoort, The Netherlands). All patients received a dose of 5,000 IU of sodium–heparin on the morning of the procedure as thrombosis prophylaxis.\n\nBODY.METHODS.STATISTICAL ANALYSIS:\nThe Pearson χ2 test was used for comparing percentages. In case of small expected numbers, a Fisher's exact test was performed. Continuous variables were analysed using the Mann–Whitney test. A P-value of 0.05 or less (two-sided) was considered to be statistically significant. Means and medians are expressed ±standard deviation (SD).\n\nBODY.METHODS.FOLLOW-UP:\nPatients returned to the surgical outpatient clinic for evaluation of the cosmetic results of the scar and to evaluate possible complications, such as fistula, wound dehiscence and incisional hernia, after a minimum of 12 months follow-up. The patient and the surgeon evaluated the cosmetic results independently and were asked to rate the scar as unsatisfactory, satisfactory or fine. Furthermore, the length and width of the scar was measured.\n\nBODY.RESULTS.STUDY GROUP:\nSome 150 consecutive patients were randomised for participation in this study during an inclusion period from April 1977 until July 1979. Seventy-five patients received a transverse incision and 75 patients a midline incision (Fig. 1). One patient was withdrawn from further follow-up after developing peritonitis and consequent acute respiratory distress syndrome (ARDS) not related to the closure of the abdominal wall 2 days after surgery (transverse incision group). The patients' average age was 51.9 and 51.4 years for the midline and the transverse incision groups, respectively. Furthermore, no differences were found in the body mass and average length between the two groups (Table 1). A cholecystectomy was performed using a transverse incision in 52 patients and utilising a midline incision in 52 patients also. Fifteen and 16 patients, respectively, underwent a combined cholangiography/cholecystectomy. 
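The statistical-analysis section above names the Pearson χ² test, Fisher's exact test for small expected counts and the Mann–Whitney test. As an illustrative sketch only (SciPy is an assumption; the original analysis predates such tooling), the snippet applies Fisher's exact test to the incisional hernia counts reported in the follow-up section, for which the article gives p = 0.017.

```python
# Editor's illustration of the named tests using SciPy; not the authors' original computation.
from scipy.stats import fisher_exact, mannwhitneyu

# Incisional hernia at follow-up: [hernia, no hernia] per group, as reported later in the article.
table = [[9, 63 - 9],   # midline incision (9/63)
         [1, 60 - 1]]   # transverse incision (1/60)
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(p_value)  # the article reports p = 0.017 for this comparison

# Mann-Whitney test on a continuous variable (purely hypothetical values shown here).
stat, p_mw = mannwhitneyu([55, 62, 48, 70, 51], [50, 47, 66, 44, 58])
```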
A further 7 and 6 patients, respectively, were treated with a cholangiography/cholecystectomy plus additional choledochotomy and the postexploratory placement of a T-tube.Table 1Baseline characteristics of the patients undergoing surgery, according to study groupVariableMidline incisionTransverse incisionn = 75n = 74Average age (years) ± SD51.9 ± 14.851.4 ± 13.8Average weight (kg) ± SD71.3 ± 14.568 ± 14.3Average length (cm) ± SD163.5 ± 7.8164 ± 7.3\n\nBODY.RESULTS.SURGEON:\nStaff surgeons performed 17% (13/75 patients) of all procedures performed through a midline incision. The remainder of the procedures through a midline incision was carried out under staff surgeon supervision. Staff surgeons performed 14% of all procedures in the transverse incisions study group (10/74 patients) and supervised the remainder. No statistically significant difference was found between the two randomised groups (P = 0.65).\n\nBODY.RESULTS.DURATION OF SURGERY:\nNo significant difference was noted in the skin-to-skin time (in min) for the two different incisions (Table 2). Surgery utilising midline and transverse incision took 56.9 ± 29.3 and 53.2 ± 26.8 min, respectively (P = 0.35). The total duration of the procedures until extubation (in min) did not differ between the midline and transverse incisions (71.0 ± 30.5 and 67.0 ± 27.3, respectively, P = 0.34).Table 2Length of incision, thickness of subcutaneous fat and skin-to-skin time, according to study groupVariableMidline incisionTransverse incisionP-valueLength of incision (mm) ± SDa164 ± 28140 ± 24<0.0001Thickness of subcutaneous fat (mm) ± SDa34.5 ± 13.030.3 ± 12.40.05Skin-to-skin time (min) ± SDa56.9 ± 29.353.2 ± 26.80.40Width of scar (mm) ± SDb8.3 ± 1.43.3 ± 1.2<0.0001aMeasured during surgery in 75 midline and 74 transverse incisionsbMeasured at follow-up in 63 and 60 midline and transverse incisions, respectively\n\nBODY.RESULTS.PAIN AND ANALGESICS:\nSignificantly more patients, having undergone a midline incision, reported pain on day one, two and three postoperatively (P < 0.0001, Table 3). In the midline incision group, 28/75 patients required no or only one dose of analgesics; the remainder required two or more doses. Thirty-one patients operated through a transverse incision required no analgesics or only one dose; 43 patients (the remainder) required two or more. No significant difference in the use of analgesics was found between the groups (P = 0.69).Table 3Postoperatively reported pain, according to study group, shown as the number of patients reporting pain at the time points indicated (percentage), with the remainder of patients reporting no painTime point after surgeryMidline incision n = 75Transverse incision n = 74P-valuePatients reporting pain, n (%)Patients reporting pain, n (%)3–4 h68 (91)60 (81)0.09First day64 (85)39 (53)<0.0001Second day57 (76)23 (31)<0.0001Third day28 (37)9 (12)<0.0001Fourth day5 (7)3 (4)0.72Fifth day0 (0)1 (1)0.50Sixth day0 (0)1 (1)0.50\n\nBODY.RESULTS.COMPLICATIONS:\nPostoperative complications (Table 4) were seen in 16 out of 75 patients (21%) from the midline incision group and in 15% from the transverse incision group (11 patients) (P = 0.30). Briefly, one patient in each group developed cardiac complications; 8 and 6 patients developed urinary retention after the midline and transverse incisions, respectively (P = 0.59). 
Surgical site infections were diagnosed in 7 and 3 patients, respectively (P = 0.33).Table 4Rate of complications after surgery, according to study group, shown as the number of patients diagnosed with complications (percentage)ComplicationMidline incisionTransverse incisionP-valuen = 75 n (%)n = 75 n (%)Cardiac1 (1)1 (1)1Urinary retention8 (12)6 (8)0.59ARDS01 (1)0.50Surgical site infection7 (9)3 (4)0.33Haemorrhage1 (1)00.50Pneumonia01 (1)0.50Total17 (23)12 (16)0.30\n\nBODY.RESULTS.DISCHARGE:\nForty-five (60%) and 42 (57%) patients from the patients having undergone a midline or a transverse incision, respectively, were discharged on day 6 or 7 postoperatively. The remaining patients from each group left hospital care on day 8 or later. The duration of hospital admission did not differ between the two types of incision (P = 0.74).\n\nBODY.RESULTS.COSMETICS:\nThe width and length of all incisions was measured during the follow-up visit (Table 2). The mean width of the scar after the healing of the midline incisions was found to be 8.3 ± 1.4 mm. The mean width of the scar after the healing of the transverse incisions was measured to be 3.3 ± 1.2 mm. This observed difference is significant (P < 0.0001). The length of the incisions was 140 ± 24 mm and 164 ± 28 mm for the transverse and the midline incisions, respectively. The difference in scar length was found to be significant (P < 0.0001).\n\nBODY.FOLLOW-UP:\nEighty-one percent of all patients operated through a transverse incision were seen during the follow-up examination (n = 60). Of the patients operated through a midline incision, 63 out of 75 were seen at the outpatient clinic (84%). The patients that were lost to follow-up could either not be traced or had deceased (Fig. 1). The minimum follow-up for the evaluation of cosmetic results and hernia incidence was 12 months and the maximum was 36 months.\n\nBODY.FOLLOW-UP.INCISIONAL HERNIA:\nFrom the patients that had undergone the procedure through a transverse incision, one (1/60; 2%) presented with an incisional hernia as opposed to 9 patients from the midline incision group (9/63; 14%); 95% confidence interval (CI) 7.5–25.4%. This difference in hernia incidence is significant (P = 0.017). No significant correlation was found between the incisional hernia rate and surgical site infection (P = 0.07).\n\nBODY.FOLLOW-UP.SUBJECTIVE COSMETICS:\nPatients and surgeons alike were asked to rate the appearance of the scar during the postoperative follow-up outpatient clinic visit. Both the surgeons and the patients found the scar resulting from the transverse incision to be more cosmetically pleasing (P < 0.0001 and P = 0.03, respectively, Table 5).Table 5Number of patients and surgeons rating the cosmetics of a scar at follow-upScoreMidline incision (n = 63)Transverse incision (n = 60)Patients, n (%)Surgeons, n (%)Patients, n (%)Surgeons, n (%)Unsatisfactory6 (10)25 (40)2 (3)6 (10)Satisfactory16 (25)27 (43)9 (15)12 (20)Fine41 (65)11 (17)49 (82)42 (70)Total63636060Difference between type of incision: patients P = 0.03; surgeons P < 0.0001\n\nBODY.DISCUSSION:\nThis prospective randomised study of transverse and midline incisions for open cholecystectomy shows that a significant reduction of incisional hernia incidence can be achieved through the use of a transverse incision. Only one other study (published in 1980) reported the incidence of incisional hernia after upper abdominal midline and unilateral transverse incision in a randomised trial. 
No difference between the two techniques (8 and 6% incisional hernia, respectively) was found, but the relatively short follow-up of 6 months, however, may be held accountable for this finding [11]. Three retrospective studies showed rates of incisional hernia of 3.2, 5.4 and 16.5% for midline incision and 1.3, 6.7 and 13.4% for transverse incision without statistically significant differences [12–14]. The possible reason for the rather high incidence of incisional hernia in the midline incision group (14%) may lie in the use of resorbable 910 polygalactin sutures. Nevertheless, the use of the same type of resorbable suture in the closure of the transverse incisions resulted in a 2% hernia rate. There is evidence for the importance of proper technique and choice of incision as a means to reduce incisional hernia being more important than the use of suture material [7]. Furthermore, as mentioned above, it is known that the incidence of incisional hernia in the case of a midline incision lies between 2 and 20%. From our data, the NNT (numbers needed to treat) is calculated to be 8 (95% CI 5–30) and the RRR (relative risk reduction) is 88% (95% CI 23–100%). Luijendijk et al. [15] have published a hernia rate of 2% after Pfannenstiel incisions closed using 910 polygalactin, which is in agreement with our findings in the patients randomised for a transverse incision, emphasising the importance of the incision over the choice of suture material. In our study, significantly fewer patients reported pain on day 1, 2 and 3 after transverse incisions, a result that was also described by other authors [16, 17]. Greenall et al. [18] published a contradictory report (in 1980) in which no significant difference in postoperative pain was found between midline and transverse incisions. The previously mentioned study, however, only analysed 46 out of 572 patients (8%) with regard to pain, which may explain the finding. In the same way, Lacy et al. suspended visual analogue pain scoring in a study comparing midline and transverse incisions for abdominal aortic surgery. Remarkably, the two groups in our study did not differ in terms of postoperative analgesia, a finding that is also reported by Lindgren et al. [17] and Donati et al. [19]. In our study, surgeons as well as patients were significantly more satisfied with the aesthetic appearance after a transverse in comparison with a midline incision. The scars after transverse incisions were found to be significantly shorter and less wide than the midline incisions, which may account for the observed difference. A possible reason for this is that a transverse incision is executed parallel to the prevailing direction of the skin lines on the abdomen and, therefore, the tension on the wound and consequent scar is low. Cholecystectomy has come a long way since this trial. The introduction and widespread acceptance of laparoscopic technique as the treatment of choice has rendered open cholecystectomy to be an operation for exceptional, and perhaps surgically difficult, circumstances. Nowadays, the study reported is hardly feasible, yet, the results are still applicable and very relevant for other surgical procedures in the (upper) abdomen. Knowledge of the favourable results of a transverse incision may aid surgeons in their choice when finding themselves in the unfortunate position of needing conversion to open cholecystectomy. 
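The NNT and RRR quoted above follow directly from the hernia proportions in the two arms (9/63 midline vs 1/60 transverse). A minimal arithmetic check, added for illustration only:

```python
# Arithmetic check of the NNT and RRR quoted in the discussion (illustration only).
cer = 9 / 63   # control event rate, midline incision (~14.3%)
eer = 1 / 60   # experimental event rate, transverse incision (~1.7%)

arr = cer - eer   # absolute risk reduction, ~0.126
nnt = 1 / arr     # ~7.9 -> reported as 8
rrr = arr / cer   # ~0.88 -> reported as 88%

print(round(nnt, 1), f"{rrr:.0%}")  # 7.9 88%
```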
In conclusion, this investigation on transverse incisions might be helpful in reducing the incidence of incisional hernia in patients after open cholecystectomy. The midline incision is a preferred manner to achieve exposure of the abdominal cavity and is considered to be easily performed and quick. Although the midline incision is generally accepted, the incidence of incisional hernias is surprisingly high [1–5]. The choice for a particular incision should not only be based on exposure, but also on hernia incidence reduction, especially since recurrence rates after hernia repair are reported to be very high. Furthermore, the recurrence rate after incisional hernia repair is a disappointing 63 and 32% for suture and mesh repair, respectively [6]. In the light of these results, incisional hernia prevention is warranted. In this investigation, it is shown that a significant reduction (from 14.5 to 1.7%) of incisional hernia incidence was achieved by using a transverse incision. Hence, a transverse incision should be considered as the preferred incision in acute and elective surgery of the upper abdomen in which laparoscopic surgery is not an option. Full exposure of two quadrants is feasible through the use of a unilateral transverse incision in, for example, biliary, bariatric, liver and colonic surgery. The transverse incision should be part of the abdominal surgeon's armamentarium and is a preferable incision to prevent the high incidence of incisional hernia after abdominal surgery.\n\n**Question:** Compared to Midline incision what was the result of Transverse incision on Infection of the surgical site?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** no significant difference\n"
}
] |
no significant difference
|
347
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Efficacy and Tolerability of Budesonide/Formoterol in One Hydrofluoroalkane Pressurized Metered-Dose Inhaler in Patients with Chronic Obstructive Pulmonary Disease\n\n ABSTRACT:\nBackground: Combination therapy with a long-acting bronchodilator and an inhaled corticosteroid (ICS) is recommended in patients with chronic obstructive pulmonary disease (COPD) who have frequent exacerbations. The efficacy and tolerability of the combination of budesonide/formoterol have been demonstrated in patients with COPD when administered via the dry powder inhaler (DPI) in a 1-year study and when administered via the hydrofluoroalkane (HFA) pressurized metered-dose inhaler (pMDI) in a 6-month study. Objective: This study assessed the long-term efficacy and tolerability of budesonide/formoterol HFA pMDI in patients with moderate to very severe COPD. Methods: This was a 12-month, randomized, double-blind, double-dummy, parallel-group, active- and placebo-controlled, multicentre study (NCT00206167) of 1964 patients aged ≥40 years with moderate to very severe COPD conducted from 2005 to 2007 at 237 sites in the US, Europe and Mexico. After 2 weeks of treatment based on previous therapy (ICSs, short-acting bronchodilators allowed), patients received one of the following treatments twice daily: budesonide/formoterol pMDI 160/4.5 μg × two inhalations (320/9 μg); budesonide/formoterol pMDI 80/4.5 μg × two inhalations (160/9 μg); formoterol DPI 4.5 μg × two inhalations (9 μg); or placebo. Main outcome measures: The co-primary efficacy variables were pre-dose forced expiratory volume in 1 second (FEV1) and 1-hour post-dose FEV1. Results: Budesonide/formoterol 320/9 μg demonstrated greater improvements in pre-dose FEV1 versus formoterol (p = 0.008), and both budesonide/formoterol doses demonstrated greater improvements in 1-hour post-dose FEV1 versus placebo (p < 0.001). The rate of COPD exacerbations was lower in both budesonide/formoterol groups compared with formoterol and placebo (p ≤ 0.004). Both budesonide/formoterol doses were more effective than placebo (p ≤ 0.006) for controlling dyspnoea and improving health status (St George's Respiratory Questionnaire). All treatments were generally well tolerated. The incidence of pneumonia was not different for active (3.4–4.0%) and placebo (5.0%) groups. Conclusions: Budesonide/formoterol pMDI (320/9 μg and 160/9 μg) improved pulmonary function and reduced symptoms and exacerbations over 1 year in patients with moderate to very severe COPD. Only budesonide/formoterol pMDI 320/9 μg demonstrated greater efficacy for both co-primary variables compared with formoterol DPI 9 μg. 
Both budesonide/formoterol pMDI dosages were well tolerated relative to formoterol and placebo.\n\nABSTRACT.ELECTRONIC SUPPLEMENTARY MATERIAL:\nSupplementary material is available for this article at 10.2165/00003495-200969050-00004 and is accessible for authorized users.\n\nBODY.BACKGROUND:\nIn patients with chronic obstructive pulmonary disease (COPD) and frequent exacerbations, the combination of a long-acting bronchodilator and an inhaled corticosteroid (ICS) is recommended.[1] Budesonide and formoterol delivered via a single dry powder inhaler (DPI; Symbicort® Turbuhaler®; AstraZeneca, Lund, Sweden) have been shown to improve pulmonary function[2,3] and reduce COPD exacerbation rates.[2,3] Fixed-combination budesonide/formoterol is available in the US in a hydrofluoroalkane pressurized metered-dose inhaler (pMDI; Symbicort® Inhalation Aerosol; AstraZeneca LP, Wilmington, DE, USA). While most patients with COPD can benefit from either a pMDIor DPI, the pMDI formulation allows patients whomay have inadequate peak inspiratory flow for optimal delivery from a DPI to use the pMDI effectively. A recent 6-month study demonstrated the efficacy and tolerability of budesonide/formoterol pMDI in patients with moderate to very severe COPD.[4] Treatment with budesonide/formoterol pMDI resulted in significant improvements in pulmonary function, dyspnoea and health-related quality of life; however, the duration of the study was limiting with respect to the analysis of low-frequency events, such as COPD exacerbations, and precluded an assessment of safety outcomes with long-term exposure.[4] In this study, the long-term efficacy and tolerability of two dosage strengths of budesonide/formoterol pMDI were assessed over 12 months in patients with moderate to very severe COPD.\n\nBODY.PATIENTS AND METHODS.PATIENTS:\nThe inclusion criteria were designed to select a population with moderate to very severe COPD with previous exacerbations (i.e. appropriate candidates for combination ICS/long-acting β2-adrenoceptor agonist [LABA] therapy): age ≥40 years, diagnosis of symptomatic COPD for >2 years, ≥10 pack-year smoking history, prebronchodilator forced expiratory volume in 1 second (FEV1) of ≤50% of predicted normal and prebronchodilator FEV1/forced vital capacity (FVC) of <70%. Patients were to have a Modified Medical Research Council dyspnoea scale score of ≥2 and a history of at least one COPD exacerbation requiring oral corticosteroids or antibacterials within 1–12 months before the first study visit. Additional enrolment criteria were the same as those in a similar 6-month study by Tashkin et al.[4]\n\nBODY.PATIENTS AND METHODS.STUDY DESIGN:\nThis was a randomized, double-blind, double-dummy, parallel-group, active- and placebo-controlled, multicentre trial (ClinicalTrials.gov identifier: NCT00206167) conducted from 2005 to 2007 at 237 sites in the US, Europe and Mexico. The study protocol was approved by a local institutional review board and ethics committee, and written informed consent was obtained from patients. The study was designed to conform with the Declaration of Helsinki, and was consistent with the International Conference on Harmonisation and Good Clinical Practice and applicable regulatory requirements. After meeting eligibility criteria, patients entered a 2-week run-in period, during which they received ICS monotherapy if previously stable on ICS (alone or in combination) and ipratropium bromide at a fixed dose if previously receiving anticholinergics. 
Albuterol (salbutamol) was permitted for rescue use throughout the study. At randomization, all previous ICSs were discontinued and patients were randomized to one of four treatments (figure 1). Patients who were treated with ipratropium bromide during the run-in period were allowed to continue on it throughout the study period. Other allowed and disallowed concomitant medications were the same as those reported in the study by Tashkin et al.[4]\nFig. 1Study design. To maintain blinding, patients received both a pressurized metered-dose inhaler (pMDI) and a dry powder inhaler (DPI) containing either active treatment or double-dummy placebo (PL) as appropriate. This study used formoterol (FM) DPI (Oxis® Turbuhaler®, AstraZeneca, Lund, Sweden) as the FM comparator because FM is not available as a hydrofluoroalkane pMDI in the US. A previous study in asthma patients reported equivalent FM-related bronchodilatory effects when FM was administered in combination with budesonide (BUD) via pMDI or alone via DPI.[5] Patients were asked to return to the clinic for follow-up visits 3–8 at the end of months 1, 2, 4, 6, 9 and 12, and received a telephone call 4 weeks after the last clinic visit. bid = twice daily; R = randomization.\n\nBODY.PATIENTS AND METHODS.EFFICACY EVALUATIONS:\nThe co-primary efficacy variables, measured at all clinic visits, were pre-dose FEV1 and 1-hour post-dose FEV1. Additional pulmonary function variables included pre-dose and 1-hour post-dose FVC measured at all clinic visits, and morning and evening peak expiratory flow (PEF) recorded daily in patient diaries. In a subset of patients (n = 491 of 1964; 25%) who agreed to undergo spirometry testing, 12-hour serial spirometry was performed and pre-dose and 1-hour post-dose inspiratory capacity (IC) were collected at randomization and months 6 and 12. FEV1 was measured pre-dose and at 5, 15, 30, 60, 120, 180, 240, 360, 480, 600 and 720 minutes post-dose. Baseline-adjusted average 12-hour FEV1 was calculated as the area between the 12-hour post-dose FEV1-over-time curve and the baseline pre-dose FEV1, divided by observation time. Spirometry was performed according to American Thoracic Society guidelines.[6] Crapo-predicted normals for FEV1 were used.[7] Secondary efficacy endpoints included COPD exacerbations, dyspnoea (separate and composite scores) and health status, all assessed as previously described.[4] A COPD exacerbation was defined as worsening of COPD requiring an oral corticosteroid or hospitalization.[4] Dyspnoea was assessed daily before the evening dose of study medication using the Breathlessness Diary, a validated single-item component of the Breathlessness Cough and Sputum Scale (BCSS).[8] Health status was assessed at months 1, 2, 6 and 12 using the St George's Respiratory Questionnaire (SGRQ).[9,10] Sleep score, percentage of awakening-free nights (sleep score of 0) and study rescue medication use were assessed as previously described by Tashkin et al.[4]\n\nBODY.PATIENTS AND METHODS.SAFETY EVALUATIONS:\nSafety was assessed by adverse event (AE) reporting. Pneumonia events were reported by physicians based on the Medical Dictionary for Regulatory Activities (version 10.0) pneumonia-related preferred terms (pneumonia, bronchopneumonia, lobar pneumonia or pneumonia staphylococcal). 
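The baseline-adjusted average 12-hour FEV1 defined above is an area-under-the-curve quantity: the area between the post-dose FEV1-over-time curve and the pre-dose baseline, divided by the observation time. The sketch below shows that calculation on hypothetical values at the stated sampling times; trapezoidal integration and division by the sampled interval are assumptions of this illustration, since the article does not specify those details.

```python
# Sketch of the baseline-adjusted average 12-hour FEV1 described above (toy values only).
times_min  = [5, 15, 30, 60, 120, 180, 240, 360, 480, 600, 720]   # post-dose sampling times (min)
fev1_l     = [1.25, 1.30, 1.32, 1.35, 1.33, 1.31, 1.30, 1.28, 1.26, 1.24, 1.22]  # hypothetical FEV1 (L)
baseline_l = 1.10                                                  # hypothetical pre-dose FEV1 (L)

def trapezoid(y, x):
    """Area under y(x) by the trapezoidal rule (an assumed integration method)."""
    return sum((y[i] + y[i + 1]) / 2 * (x[i + 1] - x[i]) for i in range(len(x) - 1))

area_above_baseline = trapezoid([v - baseline_l for v in fev1_l], times_min)
# Dividing by the sampled interval is one reasonable reading of "observation time".
average_12h_change = area_above_baseline / (times_min[-1] - times_min[0])
print(round(average_12h_change, 3))  # mean FEV1 change over the window, in litres
```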
Vital signs and 12-lead ECGs were evaluated as previously described by Tashkin et al.[4] Subsets of patients were assessed for 24-hour urinary cortisol (n = 179), 24-hour Holter monitoring (n = 520), bone mineral density (BMD) at the hip and spine regions (n = 326), and ophthalmological assessments (n = 461), including intraocular pressure and progression of lenticular opacities. In all patients, blood and urine samples were collected, and comprehensive physical examinations were carried out at the time of screening and at the end of month 12. Blood and urine samples were analysed by a central laboratory (Quest Diagnostics Clinical Laboratories, Van Nuys, CA, USA). Vital signs, including heart rate and blood pressure, were measured at all study visits. Samples to assess 24-hour urinary cortisol levels were collected in a subset of patients at or before randomization and within 1 week before the 6- and 12-month visits, and analysed by Quest Diagnostics. 24-Hour Holter monitoring was performed at screening and months 1 and 4, BMD assessments (two sets of dual energy x-ray absorptiometry scans in the hip and spine regions) were performed at screening and the end of month 12, and ophthalmology assessments (intraocular pressure and lenticular opacities) were performed at screening and the end of months 6 and 12.\n\nBODY.PATIENTS AND METHODS.STATISTICAL ANALYSES:\nThe efficacy analysis set (i.e. intent-to-treat population) included all randomized patients who received at least one dose of randomized study medication and contributed sufficient data for at least one co-primary or secondary efficacy endpoint to be calculated during the randomized treatment period. The safety analysis population included all randomized patients who received at least one dose of randomized study medication and from whom any post-randomization data were available. For the subsets of patients who underwent serial spirometry, 24-hour urinary cortisol, BMD, 24-hour Holter monitoring and ophthalmological analyses, analysis sets were defined as those patients who received at least one dose of randomized study medication and had baseline and on-treatment values for the variable being assessed. Patients who discontinued prematurely completed final visit (month 12) assessments at the time of discontinuation, followed by the 4-week follow-up. A sample size of approximately 400 patients per treatment group was calculated to allow 90% power to detect a reduction in the number of COPD exacerbations of approximately 30%, adjusting for overdispersion of 2.3. This sample size ensured >95% power to detect a difference of 0.10L in FEV1, given an estimated standard deviation of 0.3 L. All tests were two-sided, with p ≤ 0.05 considered significant. The prespecified primary comparators were formoterol DPI for pre-dose FEV1 to demonstrate the contribution of budesonide and placebo for 1-hour post-dose FEV1. To address issues of multiplicity relating to multiple-dose comparisons and multiple variables of interest, a fixed-sequence testing procedure was adopted. The primary comparisons were prespecified as budesonide/formoterol pMDI 320/9 μg compared with (i) placebo for pre-dose FEV1 and 1-hour post-dose FEV1; and (ii) formoterol DPI 9 μg for pre-dose FEV1. If significant differences were obtained, comparisons continued with budesonide/formoterol pMDI 320/9 μg compared with placebo for (i) dyspnoea; (ii) SGRQ total score; and (iii) number of exacerbations. 
If significant differences were obtained for each of these key secondary variables, the lower dose of budesonide/formoterol pMDI was tested on the co-primary variables, as previously described, and, if significant differences were obtained, testing continued with number of exacerbations, dyspnoea and SGRQ total score compared with placebo. For all secondary efficacy variables, the primary comparison was budesonide/formoterol pMDI versus placebo. Primary and secondary variables were assessed as the change from baseline to the average over the randomized treatment period except for average 12-hour FEV1, mean FEV1 at 12 hours and SGRQ at end of treatment. Changes from baseline in the co-primary efficacy variables were analysed via analysis of co-variance (ANCOVA), adjusting for treatment, country and baseline value. The following three key secondary efficacy variables were identified: (i) dyspnoea; (ii) SGRQ total score; and (iii) exacerbations. Secondary variables were analysed similar to the co-primary efficacy variables; however, exacerbation rate (number/patient-treatment year) was analysed using a Poisson regression model, adjusted for country and differential randomization time, and time to first COPD exacerbation was described via Kaplan-Meier plot and analysed using the log-rank test. The number and percentage of patients experiencing clinically meaningful changes (based on a prespecified minimal important difference) in dyspnoea were analysed using the Cochran-Mantel-Haenszel test, adjusting for country. Geometric mean 24-hour urinary cortisol levels at end of treatment were compared between treatment groups using a multiplicative ANCOVA model. Mean changes from baseline to the average during the randomized treatment period in 12-lead ECG, and Holter variables and ophthalmology variables were analysed using a model similar to that used for the co-primary variables. BMD variables (natural logarithm of the analysis timepoint minus the natural logarithm of the respective baseline value for hip and spine) were analysed using an ANCOVA model adjusting for country, treatment and natural logarithm of the baseline value. Other safety data were summarized using descriptive statistics. No formal hypothesis testing of the safety data was performed.\n\nBODY.RESULTS.PATIENTS:\nOf 1964 randomized patients, 1355 completed the study (figure 2). Discontinuation was greater with placebo versus budesonide/formoterol and formoterol driven by withdrawal of consent. Time to discontinuation was significantly (p ≤ 0.004) prolonged in both budesonide/formoterol groups versus placebo. AE was the most common reason for study discontinuation, with a similar incidence across treatments (12.1–13.5%). Approximately 60% of patients had documented use of an ICS, either alone or in combination before entering the study. The percentage of patients who discontinued from the study in the formoterol and placebo groups was slightly higher in patients previously treated with ICS (34.9% and 38.0%, respectively) compared with those not previously receiving ICS before the study (26.8% and 33.9%, respectively). In contrast, patients in the budesonide/formoterol 320/9-μg group previously receiving ICS had a lower discontinuation rate compared with those not previously receiving ICS (23.1% vs 34.1%, respectively). In the budesonide/formoterol 160/9-μg group, the percentage of patients who discontinued the study was similar for those who were previously receiving an ICS (29.5%) and those who were not (28.2%).\nFig. 
2Patient disposition. AE = adverse event; bid = twice daily; BUD = budesonide; DPI = dry powder inhaler; FM = formoterol; med = medication; PL = placebo; pMDI = pressurized metered-dose inhaler. Most demographic and disease characteristics at baseline were similar across treatments (table I). However, in the serial spirometry subset (n = 491), mean percentage reversibility was greater in the budesonide/formoterol 320/9 μg (19.9%), budesonide/formoterol 160/9 μg (20.7%) and placebo (19.5%) groups versus formoterol (16.9%). COPD severity[1] was moderate in 17.8%, severe in 60.4% and very severe in 21.5% of patients. Common co-morbid conditions included hypertension (41.6%), lipid profile abnormalities (22.0%), cardiac disease (17.7%), diabetes mellitus (11.0%), osteoporosis (10.5%), cataracts (5.2%), atrial fibrillation/arrhythmia (4.3%) and congestive cardiac failure (2.9%).\nTable IPatient demographic and baseline clinical characteristics of randomized patients\n\nBODY.RESULTS.EFFICACY EVALUATIONS.PULMONARY FUNCTION.CO-PRIMARY ASSESSMENTS:\nImprovements in pre-dose FEV1 were significantly greater for budesonide/formoterol 320/9 μg compared with formoterol (primary comparison; p = 0.008) and for both budesonide/formoterol dosages compared with placebo (p < 0.001) [figure 3a]. Although improvements in 1-hour post-dose FEV1 were significantly greater for both budesonide/formoterol dosages compared with placebo (primary comparison; p < 0.001), budesonide/formoterol 320/9 μg also resulted in significantly greater improvements compared with formoterol (p = 0.023) [figure 3b]. Improvements from baseline were apparent at the first assessment (pre-dose FEV1 at end of month 1; 1-hour post-dose FEV1 at day of randomization) and overall maintained over the 12-month treatment period for both budesonide/formoterol dosages (figure 3a and b).\nFig. 3Co-primary efficacy endpoints. Least squares mean change from baseline by study visit over the randomized treatment period in (a) pre-dose forced expiratory volume in 1 second (FEV1) and (b) 1-hour post-dose FEV1. BUD = budesonide; DPI = dry powder inhaler; FM = formoterol; pMDI = pressurized metered-dose inhaler. * p < 0.001 vs placebo; † p ≤ 0.023 vs FM.\n\nBODY.RESULTS.EFFICACY EVALUATIONS.PULMONARY FUNCTION.SECONDARY ASSESSMENTS:\nMorning and evening PEF improved significantly for all active treatments compared with placebo (p ≤ 0.012) and for both budesonide/formoterol dosages compared with formoterol (p ≤ 0.017) [table II]. Results for pre-dose and 1-hour post-dose FVC are presented in the online supplement (see figures S1a and b in the supplementary material ['ArticlePlus'] at http://links.adisonline.com/DGZ/A5).\nTable IIMean (SD) changes from baseline in additional pulmonary function assessments In the serial spirometry subset, a ≥15% improvement in FEV1 was observed at 5 minutes after dose administration (first assessment) with all active treatments on the day of randomization (budesonide/formoterol 320/9 μg, 17.7%; budesonide/formoterol 160/9 μg, 20.3%; formoterol 9 μg, 16.5%) and at end of treatment (budesonide/formoterol 320/9 μg, 21.8%; budesonide/formoterol 160/9 μg, 22.4%; formoterol 9 μg, 15.0%) [figures 4a, b and c]. There was no evidence of a diminished effect at end of treatment over the 12-hour period in the budesonide/formoterol groups. 
Mean FEV1 at 12 hours and baseline-adjusted average 12-hour FEV1 were significantly improved with both budesonide/formoterol dosages compared with placebo on the day of randomization and at end of treatment (p ≤ 0.002) [table II]. Although both budesonide/formoterol dosages demonstrated significant improvements in mean FEV1 at 12 hours and in baseline-adjusted average 12-hour FEV1 compared with formoterol on the day of randomization (p ≤ 0.029), only budesonide/formoterol 320/9 μg demonstrated this effect at the end of treatment (p ≤ 0.004) [table II].\nFig. 4Mean percentage change from baseline in forced expiratory volume in 1 second (FEV1) over 12 hours at randomization and end of treatment (EOT) for (a) budesonide (BUD)/formoterol (FM) 320/9 μg twice daily (bid) vs placebo, (b) BUD/FM 160/9 μg bid vs placebo and (c) FM 9 μg bid vs placebo. DOR = day of randomization; DPI = dry powder inhaler; pMDI = pressurized metered-dose inhaler. No significant differences in pre-dose IC were observed among the treatment groups (figure 5a). Improvements from baseline to the average over the randomized treatment period in 1-hour post-dose IC were significantly greater with both budesonide/formoterol dosages compared with placebo (p < 0.001) and formoterol (p ≤ 0.018) [figure 5b]. Both dosages of budesonide/formoterol resulted in a 350 mL (≈25%) improvement from baseline to the average over the randomized treatment period in 1-hour post-dose IC, and formoterol resulted in a 210 mL (17%) improvement in 1-hour post-dose IC.\nFig. 5Least squares mean change from baseline by study visit over the randomized treatment period in (a) pre-dose inspiratory capacity (IC) and (b) 1-hour post-dose IC. BUD = budesonide; DPI = dry powder inhaler; FM = formoterol; pMDI = pressurized metered-dose inhaler. * p < 0.001 vs placebo; † p = 0.01 vs FM; ‡ p < 0.05 vs FM; § p < 0.01 vs placebo.\n\nBODY.RESULTS.EFFICACY EVALUATIONS.CHRONIC OBSTRUCTIVE PULMONARY DISEASE (COPD) EXACERBATIONS:\nTime to first COPD exacerbation was significantly prolonged with both budesonide/formoterol dosages compared with placebo (p ≤ 0.004) and with budesonide/formoterol 320/9 μg compared with formoterol (p = 0.026) [figure 6]. In addition, significant reductions in the overall number of exacerbations per patient-treatment year were observed with budesonide/formoterol 320/9 μg and 160/9 μg versus placebo (37% and 41%, respectively; p < 0.001) and formoterol (25% and 29%, respectively; p ≤ 0.004) [see figure S2 in the supplementary material]. These reductions were driven by exacerbations treated with oral corticosteroids, the rate of which was reduced with budesonide/formoterol 320/9 μg and 160/9 μg versus placebo (37.5% and 42.9%, respectively) and formoterol (24.1% and 30.6%, respectively) [p ≤ 0.006]. In addition, the percentage of patients in the budesonide/formoterol 320/9 μg, budesonide/formoterol 160/9 μg, formoterol and placebo groups who experienced an exacerbation was greater in those receiving ICS therapy either alone or in combination before the study (33.7%, 38.2%, 39.9% and 40.1%, respectively) compared with those not previously receiving ICS (25.8%, 24.9%, 29.4% and 32.8%, respectively).\nFig. 6Kaplan-Meier probability curve for the time to first chronic obstructive pulmonary disease exacerbation during randomized treatment. BUD = budesonide; DPI = dry powder inhaler; FM = formoterol; pMDI = pressurized metered-dose inhaler. 
* p ≤ 0.004 vs placebo; † p = 0.026 vs FM.\n\nBODY.RESULTS.EFFICACY EVALUATIONS.HEALTH STATUS AND COPD SYMPTOMS:\nImprovements in SGRQ total score were significantly greater for both budesonide/formoterol dosages compared with placebo (p ≤ 0.006) and for budesonide/formoterol 160/9 μg compared with formoterol (p = 0.006; table III). The improvements from baseline in SGRQ total score in the active treatment groups were not clinically meaningful versus placebo (i.e. decrease of ≥4 points). Significantly greater improvements in COPD symptom variables were observed for both budesonide/formoterol dosages versus placebo for all variables (BCSS, dyspnoea score, cough score, sputum score, sleep score, percentage of awakening-free nights and total daily rescue medication use) [p ≤ 0.003]. Compared with formoterol, mean improvements were significantly greater with budesonide/formoterol 320/9 μg (p ≤ 0.038) for all COPD symptom variables except sputum score and percentage of awakening-free nights, and with budesonide/formoterol 160/9 μg (p ≤ 0.047) for all COPD symptom variables except BCSS, dyspnoea and sputum score (table IV).\nTable IIIMean changes (SD) in SGRQ total and domain scores from baselinea to end of treatmentbTable IVMean (SD) changes in chronic obstructive pulmonary disease (COPD) symptom variables from baselinea to the average over the randomized treatment period Improvements (i.e. reductions) in dyspnoea scores were significantly greater for all active treatments versus placebo (p ≤ 0.003) and for budesonide/formoterol 320/9 μg versus formoterol (p = 0.032; table IV). All active treatment arms demonstrated ≥0.2 points (minimal important difference) change from baseline, but only budesonide/formoterol 320/9 μg had an increase of ≥0.2 points over placebo. A significantly (p < 0.001) greater percentage of patients in both budesonide/formoterol groups and in the formoterol group experienced clinically meaningful improvements in dyspnoea versus placebo (see figure S3 in the supplementary material).\n\nBODY.RESULTS.SAFETY EVALUATIONS:\nMean treatment exposure was lowest for placebo (270 days) compared with budesonide/formoterol 320/9 μg (305 days), budesonide/formoterol 160/9 μg (299 days) and formoterol (289 days). The most commonly reported AE (irrespective of causality) was COPD, which had a lower incidence with budesonide/formoterol 320/9 μg versus placebo and a slightly higher incidence with budesonide/formoterol 160/9 μg versus placebo (table V). AEs considered by the investigator to be related to study medication were generally similar among treatment groups with the most commonly reported being oral candidiasis, COPD and dysphonia (see table SI in the supplementary material).\nTable VOverall adverse events (AEs) [irrespective of relationship to study medication] reported by ≥3% of patients The total incidence of pneumonia-related AEs (pneumonia, bronchopneumonia, lobar pneumonia and pneumonia staphylococcal) was similar for budesonide/formoterol 320/9 μg and 160/9 μg (4.0% and 3.4%, respectively) compared with formoterol (3.4%) and placebo (5.0%) [see table SII in the supplementary material]. For potential lung infections other than pneumonia, the incidence was slightly higher in the active treatment groups versus placebo, which was driven largely by bronchitis (see table SII in the supplementary material). 
AEs typically or potentially associated with local and systemic effects of inhaled corticosteroids (local effects: aphonia, dysphonia, oral candidiasis and candidiasis; systemic effects: weight gain, adrenal suppression, ocular effects, skin effects, psychiatric disorder, diabetes control, thirst, taste effects and bone effects) were more frequent with budesonide/formoterol 320/9 μg (10.3% and 4.0%, respectively) and budesonide/formoterol 160/9 μg (5.7% and 4.5%, respectively) than formoterol (0.6% and 2.6%, respectively) and placebo (2.5% and 2.7%, respectively). The overall incidence of LABA class effects (i.e. tremor, palpitation, tachycardia, potassium changes, glucose changes, headache, agitation, anxiety, sleep effects and muscle cramp) was low, but higher among active treatment groups (budesonide/formoterol 320/9 μg [9.5%], budesonide/formoterol 160/9 μg [8.9%] and formoterol [6.5%]) versus placebo (4.8%). The incidence of cardiac-related AEs was higher for all active treatments (10.5–11.3%) versus placebo (6.9%). The most common cardiac-related AEs were hypertension and angina pectoris (budesonide/formoterol 320/9 μg [2.4% and 1.2%, respectively], budesonide/formoterol 160/9 μg [1.6% and 0.8%, respectively], formoterol [2.6% and 1.8%, respectively] and placebo [2.1% and 1.0%, respectively]). Although the number of atrial fibrillation AEs was low, all cases occurred within the active treatment groups. Discontinuations due to AEs (DAEs) occurred in 238 patients, with a similar incidence observed across treatment groups (range 11.3–12.5%) [see table SIII in the supplementary material]. The most common DAE was COPD, which was highest in the formoterol group (7.3%) and lowest in the budesonide/formoterol 320/9 μg group (4.0%); the incidence was similar in the budesonide/formoterol 160/9 μg (6.1%) and placebo (6.0%) groups (see table SIII in the supplementary material). The overall incidence of non-fatal serious adverse events (SAEs) was highest with formoterol (17.8%) compared with the other treatment groups and higher with budesonide/formoterol 320/9 μg (15.6%) and budesonide/formoterol 160/9 μg (13.6%) versus placebo (12.1%) [see table SIV in the supplementary material]. The most common non-fatal SAEs (occurring in five or more patients [all treatment groups combined]) were COPD (6.8%), pneumonia (1.5%), atrial fibrillation (0.5%), angina pectoris (0.3%), bronchitis (0.3%) and coronary artery disease (0.3%). These non-fatal SAEs occurred at a similar incidence in all treatment groups, except for COPD, which was slightly more common in the active treatment groups (budesonide/formoterol 320/9 μg [7.1%], budesonide/formoterol 160/9 μg [6.7%] and formoterol [7.9%]) versus placebo (5.6%). Fifteen patients died during the randomized treatment period with no imbalance across treatments: three in the budesonide/formoterol 320/9 μg group; six in the budesonide/formoterol 160/9 μg group; two in the formoterol group; and four in the placebo group. Fifteen patients died after the randomized treatment period (≥2 days after stopping study treatment) with no imbalance across treatments: five in the budesonide/formoterol 320/9 μg group; two in the budesonide/formoterol 160/9 μg group; four in the formoterol group; and four in the placebo group. None of the deaths were considered drug related by the investigator. Clinically significant changes in vital signs, 24-hour urinary cortisol, and 12-lead ECGs, QT interval and Holter assessments were rare. 
The geometric mean values of 24-hour urinary cortisol at 6 months and end of treatment were lower in both budesonide/formoterol dosage groups compared with formoterol and placebo (see table SV in the supplementary material). Significant differences in 24-hour urinary cortisol were observed between the budesonide/formoterol 320/9 μg and placebo groups (p = 0.035) at month 6 and between the budesonide/formoterol 320/9 μg and formoterol groups (p = 0.044) at end of treatment. Corrected QT (QTc) intervals ≥450 msec were experienced by similar percentages of patients across all treatment groups, whereas few patients experienced QTc intervals ≥500 msec or a QTc change ≥60 msec (see table SVI in the supplementary material). Data from ECG and Holter recordings showed a similar incidence of new onset atrial fibrillation in the active and placebo groups. BMD was stable over the 1-year study period in all treatment groups, with small but statistically significant differences in changes from baseline observed for budesonide/formoterol 320/9 μg compared with all other treatments for total lumbar spine BMD (p ≤ 0.037) and for budesonide/formoterol 320/9 μg versus formoterol for total hip BMD (p = 0.012) [see table SVII in the supplementary material]. The differences between the treatment groups in changes from baseline in BMD were of unclear clinical relevance, as the geometric least squares mean ratios for these treatment group comparisons were close to 1 (range 0.98–0.99). Minor increases in lenticular opacities and intraocular pressure were noted across all treatment groups in the ophthalmological analysis subset; the magnitude of these changes was similar among the treatment groups (see table SVIII in the supplementary material). Small but significant differences were observed for the change from baseline in posterior subcapsular score between the budesonide/formoterol groups (p = 0.022). Clinically significant changes in ophthalmological assessment were infrequent.\n\nBODY.DISCUSSION:\nIn this study, both budesonide/formoterol dosage strengths resulted in significant improvements from baseline over the randomized treatment period in pulmonary function, health status and COPD symptoms, as measured by dyspnoea, cough and sputum production, and a clinically significant reduction in COPD exacerbation rate compared with placebo. In addition, both budesonide/formoterol pMDI dosages were well tolerated relative to formoterol alone and placebo. The results of this 12-month study confirm those from the earlier related 6-month study by Tashkin et al.[4] and are consistent with those from studies of budesonide/formoterol DPI.[2,3] Taken together, the results demonstrate that this ICS/LABA combination provides benefits beyond formoterol alone in the treatment of COPD. The contribution of budesonide to improvements from baseline in pulmonary function and COPD symptoms was demonstrated by the greater efficacy of budesonide/formoterol 320/9 μg compared with formoterol alone for the co-primary variables of pre-dose FEV1 and 1-hour post-dose FEV1, and for dyspnoea and total BCSS scores, supporting use of the higher-dose combination for COPD symptoms. Both budesonide/formoterol dosage strengths demonstrated greater improvements from baseline compared with formoterol alone in morning and evening PEF, 12-hour FEV1, symptom scores and rescue medication use. 
In addition, the 25–29% reduction in exacerbation rate observed with both budesonide/formoterol dosage strengths beyond the substantial reduction achieved with formoterol alone further demonstrates the important contribution of budesonide to the combination product. The magnitude of the reduction in exacerbation rates reported in this study is similar to that reported previously for fixed-dose regimens of budesonide/formoterol 320/9 μg twice daily[2–4] and fluticasone propionate/salmeterol[11–13] relative to LABA alone. A key difference between this 12-month study and the 6-month study of similar design reported by Tashkin et al.[4] is that this study was powered to show a difference in exacerbations, while the 6-month study was not. This study demonstrated a significant reduction in exacerbation rates in patients treated with budesonide/formoterol 320/9 μg and 160/9 μg compared with those receiving formoterol or placebo. The fact that a greater percentage of patients in the formoterol and placebo groups who previously received ICS therapy, either alone or in combination, experienced exacerbations compared with those who did not previously receive ICS therapy may result in a skewing of the data; however, a similar response was observed in combination budesonide/formoterol patients with a greater percentage of patients who previously received ICS experiencing exacerbations compared with patients who were not previously treated with ICS therapy. Similarly, a greater percentage of patients who previously received ICS therapy discontinued in the formoterol and placebo groups than those who did not previously receive ICS therapy; however, this trend was reversed in the combination budesonide/formoterol groups. The long-term nature of this study allowed for a more thorough assessment of tolerability compared with the 6-month study.[4] No differences in pneumonia incidence were observed among treatment groups in this study or in the 6-month study, despite a shorter exposure time in the placebo group.[4] These results contrast with previous COPD studies that showed an increased incidence of pneumonia with fluticasone propionate/salmeterol treatment.[11–14] However, in this study and the 6-month study,[4] the incidence of potential lung infections other than pneumonia was slightly higher in all active treatments compared with placebo; these differences were largely driven by an increased incidence of bronchitis. The incidence of SAEs, including COPD SAEs, was slightly higher in both budesonide/formoterol groups compared with placebo. This finding may be attributed to the earlier discontinuation in the placebo group, which resulted in a shorter study drug exposure (35 and 29 fewer days vs budesonide/formoterol 320/9 μg and 160/9 μg, respectively). Although the overall incidence of atrial fibrillation, a known LABA class effect, was small (1.1%), an imbalance was noted between the formoterol-containing arms and placebo. However, no differences in new-onset atrial fibrillation were apparent among groups based on 24-hour Holter monitoring or ECG data collected at repeated visits during the randomized treatment period. 
Overall, the safety profile of budesonide/formoterol pMDI in this 12-month study was similar to that reported in the 6-month study by Tashkin et al.,[4] suggesting no increase in safety concerns with a longer duration of treatment.\n\nBODY.CONCLUSIONS:\nBoth dosage strengths of budesonide/formoterol pMDI (320/9 μg and 160/9 μg) were effective and well tolerated over 1 year of treatment in this population of patients with moderate to very severe COPD. In addition, budesonide/formoterol pMDI 320/9 μg demonstrated greater efficacy compared with formoterol for pre-dose FEV1, suggesting that the higher dosage strength containing twice the daily dose of budesonide is appropriate in this patient population.\n\nBODY.ELECTRONIC SUPPLEMENTARY MATERIAL:\nSupplementary material, approximately 247 KB.\n\n**Question:** Compared to Placebo what was the result of Budesonide/formoterol pMDI 160/9 μg on Amount of exacerbations per patient-treatment year?\n\nPlease output \"significantly increased\", \"no significant difference\" or \"significantly decreased\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly decreased\n"
}
] |
significantly decreased
|
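The preceding record reports that COPD exacerbation rates (number per patient-treatment year) were analysed with a Poisson regression model adjusted for country and differential randomization time. As a loose illustration of that type of analysis only, not the study's actual model or data, the sketch below fits a Poisson regression with follow-up time as an exposure offset; the counts, follow-up times, arm sizes and the treatment-only covariate are invented placeholders, and statsmodels is just one possible library choice.

```python
# Hedged sketch: a Poisson rate model of the general shape described in the record above.
# All numbers are fabricated for illustration; the real analysis also adjusted for country.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_per_arm = 100  # hypothetical arm size

df = pd.DataFrame({
    "treated": np.repeat([1, 0], n_per_arm),              # 1 = active arm, 0 = comparator
    "years": rng.uniform(0.5, 1.0, size=2 * n_per_arm),   # differential follow-up time
})
true_rate = np.where(df["treated"] == 1, 0.9, 1.5)        # placeholder exacerbations per year
df["exacerbations"] = rng.poisson(true_rate * df["years"])

X = sm.add_constant(df[["treated"]])
res = sm.GLM(
    df["exacerbations"], X,
    family=sm.families.Poisson(),
    exposure=df["years"],                                  # events per patient-treatment year
).fit()

rate_ratio = np.exp(res.params["treated"])                 # below 1 means fewer events than comparator
print(f"Estimated rate ratio (treated vs comparator): {rate_ratio:.2f}")
```

A rate ratio below 1 corresponds to the kind of reduction versus placebo or formoterol reported in the record; adjusting for country, as the trial did, would simply add indicator columns to the design matrix.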
231
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: A prospective, contralateral comparison of photorefractive keratectomy (PRK) versus thin-flap LASIK: assessment of visual function\n\n ABSTRACT.PURPOSE:\nTo compare differences in visual acuity, contrast sensitivity, complications, and higher-order ocular aberrations (HOAs) in eyes with stable myopia undergoing either photo-refractive keratectomy (PRK) or thin-flap laser in situ keratomileusis (LASIK) (intended flap thickness of 90 μm) using the VISX Star S4 CustomVue excimer laser and the IntraLase FS60 femtosecond laser at 1, 3, and 6 months postoperatively.\n\nABSTRACT.METHODS:\nIn this prospective, masked, and randomized pilot study, refractive surgery was performed contralaterally on 52 eyes: 26 with PRK and 26 with thin-flap LASIK. Primary outcome measures were uncorrected distance visual acuity (UDVA), corrected distance visual acuity (CDVA), contrast sensitivity, and complications.\n\nABSTRACT.RESULTS:\nAt 6 months, mean values for UDVA (logMAR) were −0.043 ± 0.668 and −0.061 ± 0.099 in the PRK and thin-flap LASIK groups, respectively (n = 25, P = 0.466). UDVA of 20/20 or better was achieved in 96% of eyes undergoing PRK and 92% of eyes undergoing thin-flap LASIK, whereas 20/15 vision or better was achieved in 73% of eyes undergoing PRK and 72% of eyes undergoing thin-flap LASIK (P > 0.600). Significant differences were not found between treatment groups in contrast sensitivity (P ≥ 0.156) or CDVA (P = 0.800) at postoperative 6 months. Types of complications differed between groups, notably 35% of eyes in the thin-flap LASIK group experiencing complications, including microstriae and 2 flap tears.\n\nABSTRACT.CONCLUSION:\nUnder well-controlled surgical conditions, PRK and thin-flap LASIK refractive surgeries achieve similar results in visual acuity, contrast sensitivity, and induction of HOAs, with differences in experienced complications.\n\nBODY.INTRODUCTION:\nRefractive surgery is one of the most commonly performed elective procedures and will likely maintain its popularity as ablation techniques become more refined and understanding of corneal wound healing improves. Two of the most common methods of refractive surgery are photorefractive keratectomy (PRK) and laser in situ keratomileusis (LASIK). 
The rapid improvement in vision and lack of postoperative pain associated with LASIK has made this the preferred option with patients compared with PRK, which has greater postoperative discomfort and prolonged recovery of visual acuity.1 Recently, there has been renewed interest in PRK because of increasing concerns of complications associated with LASIK flap creation, including dry eye, corneal ectasia, and flap tears.2–5 Thin-flap LASIK attempts to gain benefits of both techniques by creating a flap of between 80 and 90 μm.6–8 Use of a thinner flap results in a more biomechanically stable cornea and decreases incidence of ectasia given the thicker residual stroma.3,9 Cutting a thinner LASIK flap is less invasive to the nerves within the corneal stroma, decreasing the severity and duration of dry eye, possibly by preserving corneal sensation and blinking rate.10–14 Flap creation avoids corneal epithelium removal, allowing reduced healing time and less haze and scarring.15 The present contralateral study compares the outcomes of eyes that have undergone PRK or thin-flap LASIK using the VISX STAR S4 excimer laser (VISX Incorporated, Santa Clara, CA), with flaps created with intended thicknesses of 90 μm using the IntraLase FS60 femtosecond laser (Abbott Medical Optics [AMO], Santa Ana, CA).\n\nBODY.METHODS:\nData from myopic eyes were analyzed, with or without astigmatism, in which the dominant eye was randomized (Research Randomizer software – Urbaniak, www.randomizer.org) to PRK or thin-flap LASIK (90 μm flap) and the nondominant eye underwent the alternative treatment. All PRK and thin-flap LASIK treatments were performed using the VISX Star S4 CustomVue laser at the John A. Moran Eye Center, Salt Lake City, Utah, between February 2008 and July 2009. All surgeries were overseen by two surgeons (M.M., M.D.M.). The research protocol was approved by the University of Utah Hospital Institutional Review Board. All patients included in this study met the US Food and Drug Administration guidelines for VISX CustomVue LASIK. Mean age of patient, 13 men and 13 women, was 30.8 years (range: 23–46). Twenty-six patients (52 eyes) with stable myopia (1.5–8.5 diopters [D]) and astigmatism (0.242–3.11 D) were enrolled in the study. Eleven patients excluded from this study had clinically significant lens opacities, previous corneal or intraocular surgery, keratoconus, unstable refraction, autoimmune disease, immunosuppressive therapy, or were pregnant or breastfeeding. Correction was made for distance and patients desiring monovision correction were excluded. Contact lenses were discontinued 2 weeks prior to screening for soft contact lens wearers and 6 weeks prior to screening for rigid gas permeable lens wearers. All patients had a preoperative examination including assessment of uncorrected distance visual acuity (UDVA), corrected distance visual acuity (CDVA), tonometry, slitlamp examination of the anterior segment, and dilated fundus examination. Manifest and cycloplegic refractions were repeated on 2 separate visits to ensure reliability and stability. Corneal topography and thickness were measured using the Orbscan II v.3.0 (Bausch and Lomb, Rochester, NY). All eyes received 5 preoperative wavefront analyses with the VISX CustomVue WaveScan aberrometer v.3.62 (Fourier) (AMO), without pharmacologic intervention, under mesopic conditions, with a minimum pupil diameter of 6.0 mm. 
The contralateral study design was made so that each eye could act as a control for the fellow eye in each patient, allowing for study groups to be well matched. There were no violations in the randomization; all patients were analyzed as originally assigned. The randomization protocol was generated before the trial and known only to the study coordinator. In all patients, the emmetropic correction target was based on manifest refraction and wavefront analysis. All flaps were created with the IntraLase FS60 femtosecond laser at 60 kHz in a raster pattern with bed energy of 1.15 μJ, side-cut energy of 2.00 μJ, and pocket enabled. The flaps were created with an intended thickness of 90 μm, diameter of 8.4 to 9.0 mm, superior hinge angle of 55°, and a side-cut angle of 70°. Intraoperative pachymetry or optical coherence tomography were not performed to evaluate actual flap thicknesses. If the 8.0 mm maximum intended ablation diameter exceeded the flap diameter, the hinge and flap were shielded during ablation. Postoperatively, each eye undergoing thin-flap LASIK received 1 drop of gatifloxacin 0.3% (Zymar; Allergan Inc, Irvine, CA), prednisolone acetate 1% (Pred Forte, Allergan Inc), ketorolac tromethamine 0.4% (Acular LS, Allergan Inc.), and a bandage soft contact lens (Softlens Plano T, Bausch and Lomb, Rochester, NY). The prednisolone acetate was continued hourly during the first preoperative day and 4 times daily for an additional 6 days. The gatifloxacin was continued 4 times daily for 1 week. In eyes undergoing PRK all eyes had their corneas cooled with 15 mL of BSS (2.8–3.9°C) immediately following ablation. This was followed by 1 drop of a gatifloxacin 0.3% (Zymar), prednisolone acetate 1% (Pred Forte), ketorolac tromethamine 0.4% (Acular LS) and a bandage soft contact lens (Softlens Plano T). Ketorolac tromethamine was continued 4 times a day for 3 days and then discontinued. Gatifloxacin and prednisolone acetate were continued 4 times a day for 1 week with a subsequent steroid taper over 2 to 3 months per surgeon preference. Mitomycin C was not administered to any patient in the study at any time. Both bandage soft contact lenses were removed simultaneously once re-epithelialization was complete, typically on postoperative days 3 to 5. Patients were seen 1 day, 1 week, 1 month ± 10 days, 3 months ±14 days, and 6 months ±14 days. At all follow-up examinations, UDVA and CDVA were tested using a standard Snellen eye chart. Visual acuity was recorded in both Snellen notation and logarithm of the minimum angle of resolution (logMAR) format. Contrast sensitivity was measured in controlled mesopic conditions at 3, 6, 12, and 18 cycles per degree (cpd) using the Vectorvision CSV-1000E chart (Vectorvision, Greenville, OH). Higher-order aberrations (HOAs), including coma Z(3,1), trefoil Z(3,3), and spherical aberration Z(4,0), were measured using the CustomVue WaveScan at a mean diameter of 6 mm. Undilated scans of both eyes were taken preoperatively and 1, 3, and 6 months postoperatively. Primary outcome measures were UDVA, CDVA, contrast sensitivity, and complications. HOAs were measured and trended within groups as secondary measures. After the study was completed, the results were compiled and the data unmasked for statistical analysis. Refractive error, visual acuity, and HOAs were treated as continuous variables and analyzed for significance by independent t-tests. In all tests, P values <0.05 were considered statistically significant. 
Data analysis was done using Microsoft Excel (Microsoft Corp, Redmond, WA).\n\nBODY.RESULTS:\nMean preoperative measurements of UDVA, CDVA, sphere, and cylinder are shown in Table 1. 25 of 26 patients (50 eyes) completed the study at postoperative 6 months. One eye in the thin-flap LASIK group required PRK retreatment following a flap tear and both eyes from this patient were therefore removed from analysis of visual acuity, contrast sensitivity, and HOAs as the retreatment prevented the ability to distinguish results between the 2 surgical methods. The eyes from this patient were still included in the analysis of complications.\n\nBODY.RESULTS.VISUAL ACUITY:\nTable 2 shows visual acuity outcomes at 1, 3, and 6 months postoperatively. Statistically significant differences were found between PRK and thin-flap LASIK in UDVA at 1 month postoperatively, with thin-flap LASIK eyes showing more improvement in UDVA. Visual acuities were not statistically different between the groups at 3 or 6 months.\n\nBODY.RESULTS.STABILITY, EFFICACY, AND PREDICTABILITY:\nTable 3 shows stability, efficacy, and predictability outcomes postoperatively at 1, 3, and 6 months. CDVA was statistically different between groups at 1 month, with 24% of the PRK group losing a line or more from preoperative values, while 9% of eyes in the thin-flap LASIK group lost only 1 line at 1 month. No eyes in the thin-flap LASIK group lost more than 1 line. Also, 39% of eyes in the thin-flap group gained a line by 1 month compared with only 12% of eyes in the PRK group. At 6 months 64% and 56% of eyes had gained a line or more of CDVA in the PRK and thin-flap LASIK groups, respectively (P = 0.462).\n\nBODY.RESULTS.CONTRAST SENSITIVITY:\nContrast sensitivity measurements at 3, 6, 12, and 18 cycles per degree (cpd) in each group are shown in Figure 1. There were no differences between groups at any cpd at any time in the study (P ≥ 0.156). The thin-flap LASIK group showed no change in contrast sensitivity postoperatively (P > 0.131), while patients in the PRK group had a slight decrease in contrast sensitivity at 1 month seen at 3 and 12 cpd (P = 0.004) and (P = 0.025), respectively. At 6 months contrast sensitivity in the PRK group was still significantly decreased from baseline at 3 cpd (P = 0.013), although it did not reach a statistically significant difference at 3 months (P = 0.101).\n\nBODY.RESULTS.COMPLICATIONS:\nTypes of complications differed between the 2 groups. In the PRK group, 2 cases of epithelial defects occurred by 1 week, but had completely resolved by 6 months. Three eyes in the PRK group had mild haze appearing as early as 1 week postoperatively. Haze remained in only 1 eye at 6 months, but was classified as minimal and had no effect on UDVA or CDVA. Nine eyes (35%) in the thin-flap LASIK group experienced complications. In the thin-flap LASIK group, flap debris (1 eye), diffuse lamellar keratitis (DLK, 1 eye), and an epithelial cyst at the edge of 1 flap were observed, with no loss of UDVA or CDVA, and all resolved by 6 months. Microstriae were observed in 6 eyes, one of which was the eye described above with flap debris and the other was the eye with DLK, with no associated loss of UDVA or CDVA, with epithelial proliferation noted as filling the microstria and making them less apparent. 
Two eyes in the thin-flap LASIK group experienced flap tears intraoperatively – one resulting in mild flap edge scarring by 6 months that had no significant effect on visual function, and the other case affecting vision at 1 month postoperatively which was retreated with PRK at 3 months. As a result of the retreatment with the counter surgical technique, the ability to accurately compare visual acuity, contrast sensitivity, and HOAs between the 2 surgical methods was limited and both eyes from this patient were removed from analysis of these measures, but were still included in the analysis of complications.\n\nBODY.RESULTS.HIGHER-ORDER ABERRATIONS:\nAt postoperative 1, 3, and 6 months, 24 (96%), 25 (100%), and 24 (96%) eyes, respectively, in each group completed CustomVue WaveScan analysis. Total root-mean square (RMS) HOAs, coma, trefoil, and spherical aberrations are compared in Figure 2. There were no significant differences between groups in any HOAs throughout the study (P ≥ 0.101), with all P values at 6 months ≥0.63. In both groups, total HOAs (P < 0.008), spherical (P < 0.002), and coma (P = 0.008 at 3 months; P = 0.024 at 6 months) aberrations were significantly increased compared with preoperative conditions. Trefoil showed no significant change throughout the study in either group (P = 0.298).\n\nBODY.DISCUSSION/CONCLUSION:\nThe present study confirms that PRK and thin-flap LASIK are effective surgeries for the correction of low to moderate myopia. Although thin-flap LASIK showed superior visual results in the early postoperative period there was no statistically significant difference in outcomes of UDVA, CDVA, contrast sensitivity, or total RMS HOAs between PRK and thin-flap LASIK by 6 months. In a similar study comparing PRK and thin-flap LASIK, Slade et al also found that UDVA results were better in the thin-flap group early on and equalized by 6 months.16 Our study showed a similar trend, with no significant differences in any of the primary outcomes at 6 months, and with no difference in UDVA at 3 months. Visual regression in our study was similar to outcomes in Slade's study in which 42% of the PRK group lost a line or more of CDVA and 22% of the thin-flap LASIK group lost 1 line at 1 month postoperatively. Despite the use of custom ablation, postoperative increases in total HOAs, sphere, and coma were noted in our study, as also seen by Slade et al, although they noted that the increase in sphere and coma aberrations was significantly higher in the PRK group at 1 and 3 months postoperatively. As found in previous studies, there was no significant change found in trefoil at any time postoperatively.17,18 Our study showed no difference in induction of HOAs between groups at any time. Although increases in HOAs after refractive surgery have been correlated with decreases in contrast sensitivity in other studies, we demonstrate that increases in total RMS, sphere, and coma were seen postoperatively in both groups without a reliable decrease in contrast sensitivity.19,20 Slade's group found that contrast sensitivity was better in the thin-flap group at all postoperative points in the study, which may have been related to their finding of lower induction of sphere and coma aberrations in the thin-flap group compared with the PRK group. The authors recognize that the Slade study had a larger population size (n = 50 per group) and would have increased power to detect significant differences. 
Our study would have had increased power of analysis with a similar study group size, but results from analysis of HOAs would not likely change as P values for all HOAs at 6 months were ≥0.63. It would be difficult to make any such correlation between contrast sensitivity and HOAs from the results of this study. A loss of CDVA has been associated with the development of corneal haze in other studies, but as mentioned above none of the patients with visual regression developed haze.21–23 Findings in other studies showing that the biomechanics of eyes that have received thin-flap LASIK treatment are indistinguishable from those of PRK have led to suggestions that thin-flap LASIK is the best approach to LASIK.16 Although the present study did not find any statistically significant differences between thin-flap LASIK and PRK in terms of visual quality at 6 months, complications dealing with flap integrity in the thin-flap LASIK group were present which are not complications found in PRK. Although PRK remains a viable option for those unable to undergo LASIK, the use of thinner flaps may eliminate some of the complications seen with traditional LASIK. Larger studies are needed to better compare the complication rates of both methods and to determine how effective thin-flap LASIK will be in achieving the benefits of PRK and LASIK while avoiding the risks associated with each method. While thinner LASIK flaps attempt to preserve the biomechanical stability of the corneal stroma, at the same time, the flap itself becomes less stable, as was noted with the 2 flap tears and other complications occurring in the thin-flap LASIK group in this study. A study by Espandar and Meyer24 showed that most complications in flaps created by IntraLase femtosecond laser occurred at the hinge, which is where the 2 flap tears that occurred in this study. A thinner flap hinge would be biomechanically less stable and would increase the likelihood of intraoperative flap tear occurrence as well. Six of the 9 eyes with complications in the thin-flap LASIK group had microstriae, which are caused by the flattening of a weak corneal flap unable to maintain its curvature over the small area of stroma removed during ablation. The biomechanics of the flap and hinge, however, cannot be evaluated by the design of this study as analysis was done based on intended flap thickness, which has been shown to vary with the IntraLase FS60 femtosecond laser.25 The study could have been strengthened had intraoperative pachymetry or OCT been performed. Creating a flap with increased integrity would help prevent microstriae from forming and would also provide for a stronger hinge that would be less susceptible to flap tear. Possible ways to optimize flap integrity include modification of hinge and side-cut angle creation, as well as improved planarity and microdisruption of flap edges. This will allow improved adhesion of the flap to the underlying stroma. Continued improvements in laser technology may allow for safer creation of thinner flaps, helping to provide evidence for superior outcomes in thin-flap LASIK, permitting the biomechanical stability of PRK with the visual recovery of LASIK. 
Custom flap formation that minimizes weak areas susceptible to tearing will be helpful in achieving this difficult balance between corneal and flap integrity.\n\n**Question:** Compared to Thin-flap laser in situ keratomileusis (LASIK) what was the result of Photo-refractive keratectomy (PRK) on Uncorrected distance visual acuity at 1 month?\n\nPlease output \"significantly increased\", \"no significant difference\" or \"significantly decreased\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly decreased\n"
}
] |
significantly decreased
|
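The visual-acuity outcomes in the preceding record are given on the logMAR scale alongside Snellen fractions (20/20, 20/15). For readers unfamiliar with the convention, the standard conversion is logMAR = log10(MAR), with MAR taken as the Snellen denominator divided by the numerator; the small helper below (function name my own) is only a reference calculation, not code from the study.

```python
import math

def logmar(snellen_numerator: float, snellen_denominator: float) -> float:
    """Convert a Snellen fraction (e.g. 20/15) to logMAR."""
    mar = snellen_denominator / snellen_numerator  # minimum angle of resolution
    return math.log10(mar)

for denom in (40, 20, 15):
    print(f"20/{denom}: logMAR = {logmar(20, denom):+.3f}")
# 20/40 -> +0.301, 20/20 -> 0.000, 20/15 -> -0.125 (better than 20/20), which is why the
# small negative mean UDVA values quoted in the record sit between 0 and -0.125.
```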
389
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Incisional hernia after upper abdominal surgery: a randomised controlled trial of midline versus transverse incision\n\n ABSTRACT.OBJECTIVES:\nTo determine whether a transverse incision is an alternative to a midline incision in terms of incisional hernia incidence, surgical site infection, postoperative pain, hospital stay and cosmetics in cholecystectomy.\n\nABSTRACT.SUMMARY BACKGROUND DATA:\nIncisional hernias after midline incision are commonly underestimated but probably complicate between 2 and 20% of all abdominal wall closures. The midline incision is the preferred incision for surgery of the upper abdomen despite evidence that alternatives, such as the lateral paramedian and transverse incision, exist and might reduce the rate of incisional hernia. A RCT was preformed in the pre-laparoscopic cholecystectomy era the data of which were never published.\n\nABSTRACT.METHODS:\nOne hundred and fifty female patients were randomly allocated to cholecystectomy through midline or transverse incision. Early complications, the duration to discharge and the in-hospital use of analgesics was noted. Patients returned to the surgical outpatient clinic for evaluation of the cosmetic results of the scar and to evaluate possible complications such as fistula, wound dehiscence and incisional hernia after a minimum of 12 months follow-up.\n\nABSTRACT.RESULTS:\nTwo percent (1/60) of patients that had undergone the procedure through a transverse incision presented with an incisional hernia as opposed to 14% (9/63) of patients from the midline incision group (P = 0.017). Transverse incisions were found to be significantly shorter than midline incisions and associated with more pleasing appearance. More patients having undergone a midline incision, reported pain on day one, two and three postoperatively than patients from the transverse group. The use of analgesics did not differ between the two groups.\n\nABSTRACT.CONCLUSIONS:\nIn light of our results a transverse incision should, if possible, be considered as the preferred incision in acute and elective surgery of the upper abdomen when laparoscopic surgery is not an option.\n\nBODY.INTRODUCTION:\nThe rate of incisional hernia after midline incision is commonly underestimated but probably lies between 2 and 20% [1–5]. Thus, incisional hernia is a major postoperative problem. The treatment of incisional hernia is complicated by high rates of recurrences. Recently, in a randomised controlled trial published by Burger et al. [6], midline incisional hernia repair has been shown to be associated with a 10-year cumulative recurrence rate of 63 and 32% for suture and mesh repair, respectively. The midline incision is the preferred incision for surgery of the upper abdomen, despite evidence that alternatives, such as the lateral paramedian and transverse incision, exist and might reduce the rate of incisional hernia [7]. 
Various approaches to opening the abdomen have been advocated over time. The choice for a certain incision is dependent on the exposure necessary for the desired procedure to succeed. A midline incision, be it supraumbilical, infraumbilical or both, is an approach especially suited for emergency and exploratory surgery because of the quick and generous exposure that can be achieved within a few minutes [8, 9]. The avascular nature of the linea alba minimises blood loss during this procedure. A supraumbilical transverse incision may be utilised in case exposure of the upper abdomen is desired. During this incision, the damage inflicted to the segmental arteries and nerves is previously described as being minimal [10]. Previously, only one randomised controlled trial, comparing transverse and true midline incisions, has been published specifically addressing incisional hernia incidence [11]. To determine whether the use of a transverse incision is an alternative to a midline incision for open cholecystectomy in terms of incisional hernia incidence, surgical site infection, postoperative pain and hospital stay, this randomised controlled trial was performed. This trial was conducted in an era when laparoscopic cholecystectomy was not yet available. The possibility of low incisional hernia rates after transverse incisions and the fact that little is known about potential advantages incited us to publish the relevant results of this randomised controlled trial which has been performed in the past and has only been reported in a Dutch thesis by one of the authors (H.L.). The primary endpoint of this study was the incisional hernia incidence after 12 months of follow-up. Secondary endpoints included pain and cosmetic appearance.\n\nBODY.METHODS.PROTOCOL:\nSome 150 consecutive female patients were randomly assigned to a midline or transverse incision as an approach for elective cholecystectomy or combined cholecystectomy and cholangiography (with or without consecutive choledochotomy) (75 and 75 patients, respectively). Emergency procedures were excluded from participation. The sample size is based on an incisional hernia rate reduction from 20 to 6% at a power of 80% and an error rate of 5%. Obtaining informed consent was conducted in accordance with the ethical standards of the Helsinki Declaration of 1975. The investigation reported was performed with informed consent from all of the patients and followed the guidelines for experimental investigation with human subjects and was approved by the medical ethics committee. An independent statistician prepared closed, tamper-proof envelopes containing the random allocation (Fig. 1). Patients were randomised for one of the procedures in theatre through the opening of the envelopes.Fig. 1Flow chart of patient inclusion and follow-up Patient-related factors that were recorded were age, body mass and length and date of operation. Operation-related factors that were recorded were the exact nature of the operation, length of the incision, the thickness of the subcutaneous fat, surgeon performing the procedure, as well as the duration of the operation (skin-to-skin time). In the immediate postoperative period, the use, dose and type of analgesics was recorded and a pain score was administered. The use of analgesics (morphine 7.5 mg intra-muscular injection, 4 h minimum interval between consecutive injections) was monitored for 48 h after surgery; the pain score was administered for the first 6 days after surgery. 
In patients assigned to surgery through a midline incision, the skin was incised from just below the xyphoid process to just above the umbilicus. The abdominal wall was opened in the midline by incising the linea alba. A Collin type (two-bladed) self-retaining retractor was used to maintain exposure. The abdominal wall was closed in one layer using single polygalactin 910 sutures (Vicryl; Ethicon, Amersfoort, The Netherlands). The skin was consequently closed using running monofilament nylon sutures (Ethilon; Ethicon, Amersfoort, The Netherlands). Patients randomised for a transverse incision received a right-sided unilateral transverse incision between 3 and 4 cm below the costal margin. The rectus muscle was incised. The fibres of the external and internal obliques and the transverse muscles were separated in the direction of their course. Exposure was achieved through the use of a manually held single-bladed retractor. Closure of the abdominal wall was achieved by closure of the peritoneum and the posterior rectus fascia using a continuous, polygalactin 910 suture (Vicryl; Ethicon, Amersfoort, The Netherlands). The anterior rectus sheath and the fascia of the internal and external transverses were closed using simple interrupted polygalactin 910 sutures (Vicryl; Ethicon, Amersfoort, The Netherlands). Towards the end of both procedures, a Redon low-vacuum drain catheter was placed, which was guided outside the abdominal cavity approximately 5 cm from the incision. The skin was consequently closed using continuous monofilament nylon suture (Ethilon; Ethicon, Amersfoort, The Netherlands). All patients received a dose of 5,000 IU of sodium–heparin on the morning of the procedure as thrombosis prophylaxis.\n\nBODY.METHODS.STATISTICAL ANALYSIS:\nThe Pearson χ2 test was used for comparing percentages. In case of small expected numbers, a Fisher's exact test was performed. Continuous variables were analysed using the Mann–Whitney test. A P-value of 0.05 or less (two-sided) was considered to be statistically significant. Means and medians are expressed ±standard deviation (SD).\n\nBODY.METHODS.FOLLOW-UP:\nPatients returned to the surgical outpatient clinic for evaluation of the cosmetic results of the scar and to evaluate possible complications, such as fistula, wound dehiscence and incisional hernia, after a minimum of 12 months follow-up. The patient and the surgeon evaluated the cosmetic results independently and were asked to rate the scar as unsatisfactory, satisfactory or fine. Furthermore, the length and width of the scar was measured.\n\nBODY.RESULTS.STUDY GROUP:\nSome 150 consecutive patients were randomised for participation in this study during an inclusion period from April 1977 until July 1979. Seventy-five patients received a transverse incision and 75 patients a midline incision (Fig. 1). One patient was withdrawn from further follow-up after developing peritonitis and consequent acute respiratory distress syndrome (ARDS) not related to the closure of the abdominal wall 2 days after surgery (transverse incision group). The patients' average age was 51.9 and 51.4 years for the midline and the transverse incision groups, respectively. Furthermore, no differences were found in the body mass and average length between the two groups (Table 1). A cholecystectomy was performed using a transverse incision in 52 patients and utilising a midline incision in 52 patients also. Fifteen and 16 patients, respectively, underwent a combined cholangiography/cholecystectomy. 
A further 7 and 6 patients, respectively, were treated with a cholangiography/cholecystectomy plus additional choledochotomy and the postexploratory placement of a T-tube.\n\nTable 1 Baseline characteristics of the patients undergoing surgery, according to study group (values are midline incision, n = 75, vs transverse incision, n = 74)\nAverage age (years) ± SD: 51.9 ± 14.8 vs 51.4 ± 13.8\nAverage weight (kg) ± SD: 71.3 ± 14.5 vs 68 ± 14.3\nAverage length (cm) ± SD: 163.5 ± 7.8 vs 164 ± 7.3\n\nBODY.RESULTS.SURGEON:\nStaff surgeons performed 17% (13/75 patients) of all procedures performed through a midline incision. The remainder of the procedures through a midline incision was carried out under staff surgeon supervision. Staff surgeons performed 14% of all procedures in the transverse incisions study group (10/74 patients) and supervised the remainder. No statistically significant difference was found between the two randomised groups (P = 0.65).\n\nBODY.RESULTS.DURATION OF SURGERY:\nNo significant difference was noted in the skin-to-skin time (in min) for the two different incisions (Table 2). Surgery utilising midline and transverse incision took 56.9 ± 29.3 and 53.2 ± 26.8 min, respectively (P = 0.35). The total duration of the procedures until extubation (in min) did not differ between the midline and transverse incisions (71.0 ± 30.5 and 67.0 ± 27.3, respectively, P = 0.34).\n\nTable 2 Length of incision, thickness of subcutaneous fat and skin-to-skin time, according to study group (values are midline incision vs transverse incision)\nLength of incision (mm) ± SD(a): 164 ± 28 vs 140 ± 24, P < 0.0001\nThickness of subcutaneous fat (mm) ± SD(a): 34.5 ± 13.0 vs 30.3 ± 12.4, P = 0.05\nSkin-to-skin time (min) ± SD(a): 56.9 ± 29.3 vs 53.2 ± 26.8, P = 0.40\nWidth of scar (mm) ± SD(b): 8.3 ± 1.4 vs 3.3 ± 1.2, P < 0.0001\n(a) Measured during surgery in 75 midline and 74 transverse incisions\n(b) Measured at follow-up in 63 and 60 midline and transverse incisions, respectively\n\nBODY.RESULTS.PAIN AND ANALGESICS:\nSignificantly more patients, having undergone a midline incision, reported pain on day one, two and three postoperatively (P < 0.0001, Table 3). In the midline incision group, 28/75 patients required no or only one dose of analgesics; the remainder required two or more doses. Thirty-one patients operated through a transverse incision required no analgesics or only one dose; 43 patients (the remainder) required two or more. No significant difference in the use of analgesics was found between the groups (P = 0.69).\n\nTable 3 Postoperatively reported pain, according to study group, shown as the number of patients reporting pain at the time points indicated (percentage), with the remainder of patients reporting no pain (patients reporting pain, n (%): midline incision, n = 75, vs transverse incision, n = 74)\n3–4 h: 68 (91) vs 60 (81), P = 0.09\nFirst day: 64 (85) vs 39 (53), P < 0.0001\nSecond day: 57 (76) vs 23 (31), P < 0.0001\nThird day: 28 (37) vs 9 (12), P < 0.0001\nFourth day: 5 (7) vs 3 (4), P = 0.72\nFifth day: 0 (0) vs 1 (1), P = 0.50\nSixth day: 0 (0) vs 1 (1), P = 0.50\n\nBODY.RESULTS.COMPLICATIONS:\nPostoperative complications (Table 4) were seen in 16 out of 75 patients (21%) from the midline incision group and in 15% from the transverse incision group (11 patients) (P = 0.30). Briefly, one patient in each group developed cardiac complications; 8 and 6 patients developed urinary retention after the midline and transverse incisions, respectively (P = 0.59). Surgical site infections were diagnosed in 7 and 3 patients, respectively (P = 0.33).\n\nTable 4 Rate of complications after surgery, according to study group, shown as the number of patients diagnosed with complications (percentage) (n (%): midline incision, n = 75, vs transverse incision, n = 75)\nCardiac: 1 (1) vs 1 (1), P = 1\nUrinary retention: 8 (12) vs 6 (8), P = 0.59\nARDS: 0 vs 1 (1), P = 0.50\nSurgical site infection: 7 (9) vs 3 (4), P = 0.33\nHaemorrhage: 1 (1) vs 0, P = 0.50\nPneumonia: 0 vs 1 (1), P = 0.50\nTotal: 17 (23) vs 12 (16), P = 0.30\n\nBODY.RESULTS.DISCHARGE:\nForty-five (60%) and 42 (57%) patients from the patients having undergone a midline or a transverse incision, respectively, were discharged on day 6 or 7 postoperatively. The remaining patients from each group left hospital care on day 8 or later. The duration of hospital admission did not differ between the two types of incision (P = 0.74).\n\nBODY.RESULTS.COSMETICS:\nThe width and length of all incisions was measured during the follow-up visit (Table 2). The mean width of the scar after the healing of the midline incisions was found to be 8.3 ± 1.4 mm. The mean width of the scar after the healing of the transverse incisions was measured to be 3.3 ± 1.2 mm. This observed difference is significant (P < 0.0001). The length of the incisions was 140 ± 24 mm and 164 ± 28 mm for the transverse and the midline incisions, respectively. The difference in scar length was found to be significant (P < 0.0001).\n\nBODY.FOLLOW-UP:\nEighty-one percent of all patients operated through a transverse incision were seen during the follow-up examination (n = 60). Of the patients operated through a midline incision, 63 out of 75 were seen at the outpatient clinic (84%). The patients that were lost to follow-up could either not be traced or had deceased (Fig. 1). The minimum follow-up for the evaluation of cosmetic results and hernia incidence was 12 months and the maximum was 36 months.\n\nBODY.FOLLOW-UP.INCISIONAL HERNIA:\nFrom the patients that had undergone the procedure through a transverse incision, one (1/60; 2%) presented with an incisional hernia as opposed to 9 patients from the midline incision group (9/63; 14%); 95% confidence interval (CI) 7.5–25.4%. This difference in hernia incidence is significant (P = 0.017). No significant correlation was found between the incisional hernia rate and surgical site infection (P = 0.07).\n\nBODY.FOLLOW-UP.SUBJECTIVE COSMETICS:\nPatients and surgeons alike were asked to rate the appearance of the scar during the postoperative follow-up outpatient clinic visit. Both the surgeons and the patients found the scar resulting from the transverse incision to be more cosmetically pleasing (P < 0.0001 and P = 0.03, respectively, Table 5).\n\nTable 5 Number of patients and surgeons rating the cosmetics of a scar at follow-up (midline incision, n = 63: patients n (%), surgeons n (%); transverse incision, n = 60: patients n (%), surgeons n (%))\nUnsatisfactory: 6 (10), 25 (40); 2 (3), 6 (10)\nSatisfactory: 16 (25), 27 (43); 9 (15), 12 (20)\nFine: 41 (65), 11 (17); 49 (82), 42 (70)\nTotal: 63, 63; 60, 60\nDifference between type of incision: patients P = 0.03; surgeons P < 0.0001\n\nBODY.DISCUSSION:\nThis prospective randomised study of transverse and midline incisions for open cholecystectomy shows that a significant reduction of incisional hernia incidence can be achieved through the use of a transverse incision. Only one other study (published in 1980) reported the incidence of incisional hernia after upper abdominal midline and unilateral transverse incision in a randomised trial. 
No difference between the two techniques (8 and 6% incisional hernia, respectively) was found, but the relatively short follow-up of 6 months, however, may be held accountable for this finding [11]. Three retrospective studies showed rates of incisional hernia of 3.2, 5.4 and 16.5% for midline incision and 1.3, 6.7 and 13.4% for transverse incision without statistically significant differences [12–14]. The possible reason for the rather high incidence of incisional hernia in the midline incision group (14%) may lie in the use of resorbable 910 polygalactin sutures. Nevertheless, the use of the same type of resorbable suture in the closure of the transverse incisions resulted in a 2% hernia rate. There is evidence for the importance of proper technique and choice of incision as a means to reduce incisional hernia being more important than the use of suture material [7]. Furthermore, as mentioned above, it is known that the incidence of incisional hernia in the case of a midline incision lies between 2 and 20%. From our data, the NNT (numbers needed to treat) is calculated to be 8 (95% CI 5–30) and the RRR (relative risk reduction) is 88% (95% CI 23–100%). Luijendijk et al. [15] have published a hernia rate of 2% after Pfannenstiel incisions closed using 910 polygalactin, which is in agreement with our findings in the patients randomised for a transverse incision, emphasising the importance of the incision over the choice of suture material. In our study, significantly fewer patients reported pain on day 1, 2 and 3 after transverse incisions, a result that was also described by other authors [16, 17]. Greenall et al. [18] published a contradictory report (in 1980) in which no significant difference in postoperative pain was found between midline and transverse incisions. The previously mentioned study, however, only analysed 46 out of 572 patients (8%) with regard to pain, which may explain the finding. In the same way, Lacy et al. suspended visual analogue pain scoring in a study comparing midline and transverse incisions for abdominal aortic surgery. Remarkably, the two groups in our study did not differ in terms of postoperative analgesia, a finding that is also reported by Lindgren et al. [17] and Donati et al. [19]. In our study, surgeons as well as patients were significantly more satisfied with the aesthetic appearance after a transverse in comparison with a midline incision. The scars after transverse incisions were found to be significantly shorter and less wide than the midline incisions, which may account for the observed difference. A possible reason for this is that a transverse incision is executed parallel to the prevailing direction of the skin lines on the abdomen and, therefore, the tension on the wound and consequent scar is low. Cholecystectomy has come a long way since this trial. The introduction and widespread acceptance of laparoscopic technique as the treatment of choice has rendered open cholecystectomy to be an operation for exceptional, and perhaps surgically difficult, circumstances. Nowadays, the study reported is hardly feasible, yet, the results are still applicable and very relevant for other surgical procedures in the (upper) abdomen. Knowledge of the favourable results of a transverse incision may aid surgeons in their choice when finding themselves in the unfortunate position of needing conversion to open cholecystectomy. 
In conclusion, this investigation on transverse incisions might be helpful in reducing the incidence of incisional hernia in patients after open cholecystectomy. The midline incision is a preferred manner to achieve exposure of the abdominal cavity and is considered to be easily performed and quick. Although the midline incision is generally accepted, the incidence of incisional hernias is surprisingly high [1–5]. The choice for a particular incision should not only be based on exposure, but also on hernia incidence reduction, especially since recurrence rates after hernia repair are reported to be very high. Furthermore, the recurrence rate after incisional hernia repair is a disappointing 63 and 32% for suture and mesh repair, respectively [6]. In the light of these results, incisional hernia prevention is warranted. In this investigation, it is shown that a significant reduction (from 14.5 to 1.7%) of incisional hernia incidence was achieved by using a transverse incision. Hence, a transverse incision should be considered as the preferred incision in acute and elective surgery of the upper abdomen in which laparoscopic surgery is not an option. Full exposure of two quadrants is feasible through the use of a unilateral transverse incision in, for example, biliary, bariatric, liver and colonic surgery. The transverse incision should be part of the abdominal surgeon's armamentarium and is a preferable incision to prevent the high incidence of incisional hernia after abdominal surgery.\n\n**Question:** Compared to Midline incision what was the result of Transverse incision on Smaller incision ?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly increased\n"
}
] |
significantly increase
|
482
|
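The preceding row's article derives an NNT of 8 (95% CI 5–30) and an RRR of 88% (95% CI 23–100%) from the incisional hernia counts (9/63 after midline vs 1/60 after transverse incision) and reports the between-group difference at P = 0.017. As a reading aid only, the minimal Python sketch below reworks that arithmetic; it is not the trial's analysis code, and the variable names and the use of `scipy.stats.fisher_exact` are assumptions made here for illustration.

```python
# Minimal sketch (illustrative only): reproduce the NNT/RRR arithmetic and the
# between-group comparison reported for incisional hernia in the preceding row.
from scipy.stats import fisher_exact

midline_hernia, midline_n = 9, 63        # ~14% hernia rate reported
transverse_hernia, transverse_n = 1, 60  # ~2% hernia rate reported

risk_midline = midline_hernia / midline_n
risk_transverse = transverse_hernia / transverse_n

arr = risk_midline - risk_transverse     # absolute risk reduction
nnt = 1 / arr                            # article reports NNT = 8
rrr = arr / risk_midline                 # article reports RRR = 88%

# 2x2 table: rows = incision type, columns = hernia yes / no
table = [[midline_hernia, midline_n - midline_hernia],
         [transverse_hernia, transverse_n - transverse_hernia]]
odds_ratio, p_value = fisher_exact(table)  # article reports P = 0.017

print(f"NNT ~ {nnt:.1f}, RRR ~ {rrr:.0%}, P = {p_value:.3f}")
```

The confidence intervals quoted in the article are not recomputed here, since the interval method used by the authors is not stated.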
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Clinical and radiological evaluation of Trabecular Metal and the Smith–Robinson technique in anterior cervical fusion for degenerative disease: a prospective, randomized, controlled study with 2-year follow-up\n\n ABSTRACT:\nA prospective, randomized, controlled study was carried out to compare the radiological and clinical outcomes after anterior cervical decompression and fusion (ACDF) with Trabecular MetalTM (TM) to the traditional Smith–Robinson (SR) procedure with autograft. The clinical results of cervical fusion with autograft from the iliac crest are typically satisfactory, but implications from the donor site are frequently reported. Alternative materials for cervical body interfusion have shown lower fusion rates. Trabecular Metal is a porous tantalum biomaterial with structure and mechanical properties similar to that of trabecular bone and with proven osteoconductivity. As much as 80 consecutive patients planned for ACDF were randomized for fusion with either TM or tricortical autograft from the iliac crest (SR) after discectomy and decompression. Digitized plain radiographic images of 78 (98%) patients were obtained preoperatively and at 2-year follow-up and were subsequently evaluated by two senior radiologists. Fusion/non-fusion was classified by visual evaluation of the A–P and lateral views in forced flexion/extension of the cervical spine and by measuring the mobility between the fused vertebrae. MRI of 20 TM cases at 2 years was successfully used to assess the decompression of the neural structures, but was not helpful in determining fusion/non-fusion. Pain intensity in the neck, arms and pelvis/hip were rated by patients on a visual analog scale (VAS) and neck function was rated using the Neck Disability Index (NDI) the day before surgery and 4, 12 and 24 months postoperatively. Follow-ups at 12 and 24 months were performed by an unbiased observer, when patients also assessed their global outcome. Fusion rate in the SR group was 92%, and in the TM group 69% (P < 0.05). The accuracy of the measurements was calculated to be 2.4°. Operating time was shorter for fusion with TM compared with autograft; mean times were 100 min (SD 18) and 123 min (SD 23), respectively (P = 0.001). The patients' global assessments of their neck and arm symptoms 2 years postoperatively for the TM group were rated as 79% much better or better after fusion with TM and 75% using autograft. Pain scores and NDI scores were significantly improved in both groups when compared with baseline at all follow-ups, except for neck pain at 1 year for the TM group. There was no statistically significant difference in clinical outcomes between fusion techniques or between patients who appeared radiologically fused or non-fused. There was no difference in pelvic/hip pain between patients operated on with or without autograft. 
In our study, Trabecular Metal showed a lower fusion rate than the Smith–Robinson technique with autograft after single-level anterior cervical fusion without plating. There was no difference in clinical outcomes between the groups. The operative time was shorter with Trabecular Metal implants.\n\nBODY.INTRODUCTION:\nAnterior surgery of the cervical disc with fusion using autograft from the iliac crest was introduced in the 1950s [10, 40]. The clinical results are typically satisfactory, with at least 75–80% of the patients satisfied, reporting reduced pain intensity, improved function and neurological restitution [9, 24, 27, 30, 32, 35]. However, infections, hematomas and longstanding pain [12, 25, 28, 33, 36, 37, 50] are frequently reported complications from the donor site. Allografts have been widely used, but imply risks of producing immunogenic response from the host, which might disturb fusion healing and involve the risk of transmitting infections [2, 5, 16, 26]. The risk is low, but can still be important if the infection is severe, as shown, e.g., when HIV is detected [7]. The ideal substitute for autograft should provide all three of its fundamental properties: osteogenicity, osteoconductivity and osteoinductivity. Several implants have been tested, but no ideal substitute or surgical method has been found [44]. Previously we have reported a low fusion rate for a cervical body interfusion with carbon fiber cage (Brantigan) [45]. Trabecular Metal (TM) is a porous tantalum biomaterial with structure and mechanical properties similar to trabecular bone (Fig. 1) and has been shown to be more osteoconductive than other commercially available biomaterials [4, 11, 22].Fig. 1Trabecular MetalTM implant The objectives of the study were to measure and compare the radiological and clinical outcomes of anterior cervical decompression and fusion (ACDF) with Trabecular Metal (TM) devices and the traditional Smith–Robinson (SR) procedure with autograft.\n\nBODY.METHODS:\nAll patients scheduled for single-level anterior cervical decompression and fusion (ACDF) and fulfilling the criteria for the study were consecutively invited to participate during the period February 2002 to September 2003. Five patients declined participation and the remaining 80 provided informed consent. Study inclusion criteria were cervical radiculopathy with or without myelopathy due to degenerative disc disease (including disc herniation and/or spondylosis) with compatible MRI and clinical findings. Exclusion criteria were previous cervical spine surgery, posttraumatic neck pain, inflammatory systemic disease, another neurological disease and drug or alcohol abuse. No patient had spontaneous fusion at the adjacent segments. A flowchart of the study is presented in Fig. 2.Fig. 2Flowchart for the study After discectomy, the subchondral bony end plates were roughened by the burr until they bled, taking care that they would be able to function as a bearing surface for the implant. The posterior longitudinal ligament was removed in the majority of cases, and consequently osteophytes, if present, were removed. After decompression had been completed, randomization to fusion group was performed in the theater by a nurse using sealed envelopes. This late randomization was used to avoid surgeons' bias to treatment group during as much as possible of the surgical procedure. An implant size that could be positioned between the end plates by light tapping was chosen. 
The tricortical autografts were taken from the iliac crest using a saw with a twin blade. A subcutaneous catheter was placed at the donor site for administration of ropivacaine hydrochloride (NaropR) 2 or 3 days postoperatively, to reduce the pain. All patients used a soft collar for 6 weeks postoperatively. The randomization procedure yielded similar group distributions of age, gender and smoking habits (Table 1). The operated segment was C3/4 in 2 patients (both SR), C4/5 in 4 patients (2 TM, 2 SR), C5/6 in 50 patients (26 TM, 24 SR), C6/7 in 23 patients (11 TM, 12 SR) and C7/T1 in 1 patient fused with TM. See Table 2 for implant size. The operations were performed by five senior surgeons, and 70 out of the 80 patients were operated on by one of the two authors (HL, LV).Table 1Patient dataTotalTMSRAge median (range) (years)49 (27–70)48 (38–59)49 (27–70)Sex, male (no.)502426Smokers (no.)261610Myelopathy* (no.)422* All had radiculopathyTable 2Trabecular Metal implant sizeSize H × D × W (mm)No. of patients5 × 11 × 1415 × 14 × 1416 × 11 × 1466 × 14 × 14127 × 11 × 1467 × 14 × 14118 × 14 × 143Total40\n\nBODY.METHODS.CLINICAL FOLLOW-UP:\nPain intensity in the neck, arms and pelvis/hip were rated by patients on a visual analog scale (VAS, 0–100), and neck function was rated using the Neck Disability Index [46] (NDI, 0–100) the day before surgery, and 4, 12 and 24 months postoperatively. Pain drawings were obtained at the same intervals. Follow-ups at 12 and 24 months were performed by an unbiased observer (ME), and patients also assessed their global outcomes at these same follow-up intervals.\n\nBODY.METHODS.RADIOLOGICAL FOLLOW-UP:\nDigitized plain radiographic images of 78 (98%) patients were obtained preoperatively, immediately postoperatively and at 2-year follow-up, and were subsequently evaluated by two senior radiologists. Consensus about fusion or non-fusion was reached after the first evaluation in 49 cases (62%) and after the second evaluation in the remaining 29 cases (38%). The second measurement was used to calculate intra-observer variability. The entire data set was analyzed to assess inter-observer variability of radiographic measurements and associated precision of the measurements. Fusion/non-fusion was classified by visual evaluation of the A–P and lateral views in forced flexion/extension of the cervical spine, i.e., (1) the presence/absence of bone-bridging or interface lucencies between TM and bone, and (2) by measuring the differences between the angles of the spinal processes of the fused vertebrae at flexion and extension. Fusion was classified as either clearly fused (I), probably fused (IIA), probably non-fused (IIB) and clearly non-fused (III) (Figs. 3, 4, 5). Finally, the material was dichotomized so that groups I and IIA were combined to fused, and groups IIB and III to non-fused. The same classification had been used in a previous study of the Brantigan carbon fiber cage [45]. For classification in group I, radiological signs of bone bridging were required and mobility of up to 1.0° was accepted. Cases classified in group II had uncertain signs of bone fusion. Group IIA had mobility of 2.0° or less and group IIB had more than 2.0°. Group III required both the absence of bone bridging and mobility of 3.0° or more.Fig. 3Clearly fused (group I) after surgery with autograft (SR)Fig. 4Clearly fused (group I) after surgery with Trabecular Metal (TM)Fig. 5Clearly non-fused (group III) after surgery with Trabecular Metal (TM). Note the radiolucent zone above the implant. 
Mobility is seen between images in flexion and extension MRI was performed on 20 consecutive TM cases at 2-year follow-up. Several parameter sets suggested for TM in the published literature were tested [21, 47] in addition to our standard protocol for the degenerative cervical spine, and the parameter sets below were ultimately chosen. The scans were performed on a Siemens Vision 1.5 T MRI scanner using a cervical spine coil with a protocol consisting of T1-sagittal images (TR 500, TE 12, se), T2/PD sagittal images (TR 4000, TE 128, tse and TR 1300, TE 120, se), PD sagittal images (TR 1300, TE 60, se), T1 axial images (TR 600, TE 15, se), T2 axial images (TR 620, TE 10, 25°, Fl2d), T2 oblique images (TR 4000, TE 128, tse), T2 coronal images (TR 1485, TE 120, se) and finally PD coronal images (TR 1485, TE 60, se). Slice thickness was 4 mm in all images.\n\nBODY.METHODS.RADIOLOGICAL FOLLOW-UP.STATISTICAL METHODS:\nA rank-invariant non-parametric method for analysis of pairs ordered categorical data was used to compare the pain ratings (VAS) and NDI for the groups. The method makes it possible to separately measure order-preserved individual changes attributable to the group change, as well as an individual change in category that is different from the change of pattern in the group [41, 42]. Clinically relevant improvement, set at 10, was calculated for VAS and NDI. χ2 tests were used to compare the groups. Student's t test was used to analyze the operative and hospital time. Fischer's exact test was used to compare the fusion rate in the groups. Inter- and intra-observer correlation was calculated using kappa analysis. A value of P < 0.05 was considered to be statistically significant.\n\nBODY.RESULTS.SURGERY:\nOperation times were shorter for fusion with TM as compared to autograft; mean times were 100 min (SD 18) and 123 min (SD 23), respectively (P = 0.001). There was no difference in intra-operative bleeding between the implant groups. Of the 80 patients, 72 had less than 50 ml of bleeding. There was no statistically significant difference in the hospital time between the groups; TM mean was 3.6 days (SD 1.1) and SR mean, 4.1 days (SD 1.7) (P = 0.18).\n\nBODY.RESULTS.CLINICAL OUTCOME:\nFor patients receiving TM, the maximal pain (VAS) was reduced from median 57 in the neck and 45 in the arm before surgery, to 40 and 14 at 1 year and to 41 and 24 at 2 years postoperatively. In the group with autografts, the corresponding VAS ratings were reduced from median 66 in the neck and 60 in the arm before surgery to 36 and 28 at 1 year, and to 24 and 28 after 2 years (Fig. 6a, b). The number of patients showing clinically relevant improvement (set to at least 10 for VAS) in neck pain was 39% with TM and 63% with SR (P = 0.07), and in arm pain 50 and 58%, respectively.Fig. 6Pain rating (VAS) and Neck Disability Index preoperatively and at the follow-ups. The box plots illustrate the 25th and 75th percentiles with the median value marked in between. The range is shown by whiskers, but extreme outliers are separately shown by circles. There were no statistically significant differences between the surgical methods NDI improved from median 36 preoperatively to 30 after 2 years in patients with TM, and from 44 to 25 in the SR group (Fig. 6c). Clinically relevant improvement in NDI (set to at least 10) was found in 53% of the patients with TM and 61% with SR. 
The patients' global assessment of their neck and arm symptoms 2 years postoperatively for the TM group were: 41% much better, 38% better, 10% unchanged, 8% worse, and (one patient) 3% much worse. In the SR group, the assessments were: 42% patients much better, 33% better, 13% unchanged and 12% worse, i.e., 79% were much better or better after fusion with TM and 75% using autograft. At all follow-ups of 4, 12 and 24 months, pain scores (VAS) in both neck and arm, and NDI scores were significantly improved in both groups when compared with baseline, except for neck pain (VAS) at 12 months in patients fused with TM (P = 0.06). No statistically significant difference was found between the Trabecular Metal and autograft techniques for pain scores, NDI or the patients' global assessments, and at all follow-up intervals. A trend toward a higher proportion of patients with clinically relevant improvement in neck pain (at least 10 mm VAS) was measured after 2 years in patients with autografts (P = 0.07). The clinical results and corresponding P values are presented in Table 3.Table 3The clinical results and corresponding P valuesTMSRP valueMedianq1–q3Medianq1–q3TM vs. SRTM vs. baselineSR vs. baselineMaximal neck pain (VAS)Preoperative5733–726629–774 months224–55173–500.40.0002<0.0011 year403–63363–710.60.060.0052 years415–74246–660.60.0480.002Maximal arm pain (VAS)Preoperative4518–776027–734 months62–2570–340.9<0.0001<0.0011 year142–62283–720.40.0060.0462 years240–72284–630.80.0340.008Maximal pelvic pain (VAS)Preoperative00–2120–54 months00–420–120.20.50.91 year00–420–120.20.30.62 years00–1500–80.80.50.8Neck Disability IndexPreoperative3625–474430–511 year288–44279–490.80.0050.0012 years3012–47258–440.80.0010.001Clinically relevant improvement inNeck pain (at least 10 for VAS)39%63%0.07Arm pain (at least 10 for VAS)50%58%0.6NDI (at least 10)53%60%0.6Patients' global assessment at 2 years (%)Much better4142Better3833Unchanged1013Worse812Much worse30Better/much better79750.5Bold values are P values < 0.05 No differences in clinical outcomes were seen between patients who appeared radiologically fused or non-fused (P = 0.6). There was a tendency toward poorer clinical outcome for smokers compared with non-smokers, estimated by the patients' global assessments (P = 0.07).\n\nBODY.RESULTS.PELVIC PAIN:\nThere was no difference in pelvic/hip pain (at the donor site) preoperatively and at 4, 12 or 24 months, between patients fused with and without autograft. Further analysis of the pain drawings showed eight patients with markings at the right iliac crest (four SR, four TM). However, the majority had marked this as related to the pain caused by lumbago/sciatica or generalized pain. Only one patient, who had been fused with TM, marked localized pain in this area. In summary, no remaining donor site pain was marked in the pain drawings, and none was seen in the VAS scoring.\n\nBODY.RESULTS.COMPLICATIONS.FURTHER SURGERY:\nThree patients were reoperated: two of them because of non-fusion and one due to graft dislocation. All had been primarily fused with autografts (SR). They were all clearly fused 2 years postoperatively. One patient fused with TM was operated on at the adjacent segment after 19 months. The only patient with remaining symptoms due to complications 2 years after surgery had a sensory deficit below the donor site at the iliac crest (SR). Further complications in the SR group included three patients with wound infections at the iliac crest, and one of them with an infected hematoma. 
One patient had pneumonia and one had a lower urinary tract infection (cystitis). All infections were cured after antibiotic treatment. One patient developed a fissure in the autograft during the primary surgery. A plate was added to the fixation directly, and the fusion healed without further complications. Among the patients fused with TM, two had transient hoarseness, and one of them also had swallowing disturbances. One patient was treated with antibiotics for a urinary tract infection. In summary, nine patients in the SR group and three patients in the TM group had complications, but only one patient (SR) had symptoms 2 years after surgery.\n\nBODY.RESULTS.RADIOLOGICAL OUTCOME:\nThe fusion rate shown by the radiological analysis is presented in Table 4. All patients in group III (clearly non-fused) showed at least 4° of mobility (the limit for the group set by the classification was 3.0°). There was no statistically significant difference in the fusion rate between smokers (92%) and non-smokers (74%) (P = 0.2). Smokers operated on with TM showed an 87% fusion rate. Kappa analysis showed 0.63 and 0.66 for the intra-observer correlation, and 0.58 for inter-observer correlation.Table 4Radiological fusionFusion groupTMSRNo.PercentageNo.PercentageI (clearly fused)18463589IIA (probably fused)92313IIB (probably non-fused)51300III (clearly non-fused)71838Fusion rate (I + IIA)6992\n\nBODY.RESULTS.MAGNETIC RESONANCE IMAGING:\nMRI, of 20 TM cases was successfully used to assess decompression of the neural structures, but was not helpful in determining fusion/non-fusion attributable to metal artifacts in the area immediately surrounding the implants.\n\nBODY.DISCUSSION:\nAutograft is referred to as the gold standard for spinal fusion [43] due to its unique combination of osteogenicity, osteoconductivity and osteoinductivity. We had hypothesized that similarly high fusion rates for TM as for autograft could be obtained, but without the risk of complications from the donor site associated with autograft. The fusion rate of carbon fiber cages used in the treatment for the degenerative cervical spine was 62% in our previous study of the Brantigan cage [45], which led us to discontinue the use of the device. TM cages were chosen for the present study because of the unique microstructure of the material and because of the published affinity of osteocytes to tantalum metal [22]. These factors were hypothesized to promote bone ingrowth and enhance fusion. The fusion rate for TM in the present study was higher than for the carbon fiber cages, but lower than that of the SR group. In a recent study, Fernández-Fairen et al. [14] compared TM used as a stand-alone cage with autograft used with plate. The fusion rate was 89% for TM and 85% for autograft with plate. No statistically significant difference in radiological fusion or in clinical outcome was found between the groups. Criteria for fusion were that \"segments were deemed fused when there was evidence of bony bridging around the implant and/or <2° of variation of Cobb's angle on F/E radiographs or <2 mm of variation in the interspinous distance, in the absence of periimplant radiolucency\". We had similar criteria for fusion, besides measuring the movement between the spinal processes. The criterion accepting <2 mm movement is probably wider compared to the criterion of <2° variation of the angle [8, 13]. With less stringent criteria for fusion (more motion allowed), the apparent fusion rates increase, as demonstrated by Fasset [13]. 
Smokers were excluded from the study by Fernández-Fairen et al. [14], while we had 40% smokers in the group fused with TM. We found no statistically significant difference between smokers and non-smokers in our study, but it is still possible that smoking had some adverse influence on the fusion healing. On the other hand, the fusion rate in our control group with autograft was 92%, while it was 85% after autograft with plate in the study by Fernandez et al., though 25% of our patients operated on with autograft were smokers. Our patients were not randomized to fusion group until the major part of the surgical procedure including the decompression was completed, which reduced the risk for surgeons' bias, while the preoperative randomization in the study by Fernandez et al. might have had an adverse influence on the control group. Wigfield et al. [49] have presented a study with tantalum interbody implant, where inclusion of patients was halted after radiographs 6 weeks postoperatively had shown inferior end-plate lucency, raising concerns about delayed fusion or non-fusion. However, fusion was subsequently noted in all 15 patients available for follow-up at 12 months of the 17 patients operated on with tantalum implant, but the study numbers were too small for statistical analysis. Fusion was defined as less than 4° angulation between flexion and extension radiographs and absence of radiolucency extending over more than 50% of the implant/end plate interface. Baskin and Travnelis [3] compared TM with autograft in an RCT that was terminated due to concerns over delayed fusion after 39 patients had been enrolled. Of the 28 patients operated on with TM, 6 out of 16 patients (37%) who were examined with radiographs at 24 months were fused. A low fusion rate with TM was found by Zoëga and Lind [51] as well. Two years after ACDF with TM cage, none of the 13 patients had fusion. Those authors used radiostereometric analysis (RSA) for the follow-up, which is a very sensitive method for detecting motion [17, 29, 53]. Clinical outcome data showed no statistical difference between non-fused and fused patients in the present study. Earlier studies have shown divergent results concerning correlation between fusion and clinical outcome, with some pointing to the importance of the fusion for the clinical outcome [9, 27, 48] and others denying such a connection [12, 25, 31]. Addressing the fusion rate alone (without considering the clinical outcome), the use of TM as stand-alone device does not seem sufficient. The fusion rate with TM might be enhanced with an anterior plate, considering published results of TM with and without pedicle screws used in the porcine lumbar spine [54], as well as fusion rates for TM with allograft and anterior plate [35]. The use of an anterior plate in these studies suggests that initial stability may be an important factor in achieving fusion [36]. Because of the results obtained, we now use TM together with an anterior plate. Because our earlier study of the Brantigan cage showed closer correlation between radiological fusion and clinical outcome 5 years postoperatively as compared with 2 years [27], the present study will be extended. It has been advocated that a fusion cage can avoid subsidence better than an autograft, due to collapse of the latter. Some studies support this [45], while others show similar subsidence with the cage as well [15, 23]. 
This question was not addressed in the present study, where the radiological evaluation focused on whether the operated segment was fused or not. The accuracy of measurements of motion on digitized radiograph images was considerably higher in the present study (2.4°, 95% CI) than in our previous experience measuring on conventional radiographic films. The accuracy of measurements on conventional radiograph images has previously been estimated at 5° [17] and the cutoff for mobility has been set at 4° for studies of cervical implants. The described method using digitized radiograph images has reduced the difference in accuracy compared to the much more complex and expensive radiostereometry (RSA). We estimated the accuracy for RSA in the cervical spine at about 1° in a previous study [25, 29], which is less accurate than in the lumbar spine, mainly owing to the small size of the cervical vertebral bodies. With distortion-compensated roentgen analysis (DCRA), another technique for computerized analysis of conventional radiographs, Leivseth found an error of 2.4° [20]. MRI of 20 TM cases was successfully used to assess decompression of the neural structures, but was not helpful in determining fusion/non-fusion. The artifacts from the implants could be limited to the structures immediately surrounding the implants. Hence, the spinal canal and the foraminae could be visualized, and the decompression assessed, but interpretation of the interface between implant and vertebral body was disturbed. This is in contrast to the experience in the lumbar spine [personal communication, D Robertson] and is mainly due to the smaller size of the cervical vertebrae. The primary advantage of using an implant rather than autograft bone for ACDF is that it avoids complications associated with the donor site. Several previous studies have reported persistent pain in 15–40% of the patients 2 years after surgery [6, 12, 25, 33, 37, 50], though some studies show that this is less frequent [1, 34]. In our earlier study of the Brantigan cage, we found more donor site pain immediately postoperatively when using a conventional graft from the iliac crest as compared with using a percutaneous technique [45]. In the present study, no residual donor site pain was found at 4 months or later after surgery. In the early postoperative period, donor site pain is frequent, and it was experienced by our patients, but no assessment of the pain was made in this period. The administration of ropivacaine hydrochloride (NaropR) subcutaneously for the first few postoperative days resulted in pain reduction in our patients. This postoperative pain reduction might have reduced the tendency to persisting pain as well, due to less central sensitization caused by the postoperative pain [18, 19]. Sing et al. [38] have shown the good effects of continuous local anesthetic infusion on the acute graft-related pain as well as a remaining effect 4 years postoperatively [39]. In both groups in our study, 10% of the patients marked the pelvic/hip region on the pain drawing at 2 years, which illustrates the importance of having a control group for all follow-ups. Patients operated on with autografts were at risk of rare complications, such as neuralgia, although this did not occur in the moderate number of studied patients. Despite absence of chronic pain, we still found donor site morbidity; one patient had lasting sensory disturbance and three were treated for local infections. 
The clinical outcome in the present study showed 28 and 22 mm reduction in pain rating (VAS) in the neck and arm, respectively, 12 points improvement in NDI, and improvement for 77% of the patients in the global assessment. This is in accordance with earlier studies [9, 24, 25, 30, 32, 35, 45, 52].\n\nBODY.CONCLUSIONS:\nThis study of uninstrumented single-level ACDF showed a lower fusion rate with Trabecular Metal than with the Smith–Robinson technique with autograft after single-level anterior cervical fusion without plating. There were no differences in the clinical outcomes between the groups, and there were no differences in outcomes between patients who appeared radiologically fused or non-fused. The operating time was shorter with Trabecular Metal implants. No remaining donor site pain at the iliac crest was seen at 4 months or later.\n\n**Question:** Compared to anterior cervical decompression and fusion (ACDF) with Trabecular MetalTM (TM) what was the result of raditional Smith–Robinson (SR) procedure with autograft on clinical outcomes?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** no significant difference\n"
}
] |
no significant difference
|
367
|
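The preceding row classifies radiological fusion from flexion/extension radiographs into groups I–III using bone bridging plus mobility thresholds of 1.0°, 2.0° and 3.0°, dichotomizes I + IIA as fused, and reports fusion rates of 69% for Trabecular Metal (27/39) versus 92% for Smith–Robinson (36/39), compared with Fisher's exact test. The Python sketch below is one illustrative reading of those published criteria, not the investigators' code; the function names and the reuse of `scipy.stats.fisher_exact` are assumptions.

```python
# Minimal sketch (illustrative only): the fusion classification rule described
# in the preceding row, plus the dichotomized fusion-rate comparison it reports.
from scipy.stats import fisher_exact

def classify_fusion(bone_bridging: bool, mobility_deg: float) -> str:
    """Map radiographic findings to the study's fusion groups I/IIA/IIB/III."""
    if bone_bridging and mobility_deg <= 1.0:
        return "I"      # clearly fused
    if not bone_bridging and mobility_deg >= 3.0:
        return "III"    # clearly non-fused
    if mobility_deg <= 2.0:
        return "IIA"    # probably fused (uncertain signs of bone fusion)
    return "IIB"        # probably non-fused (mobility > 2.0 degrees)

def is_fused(group: str) -> bool:
    # Dichotomization used for the reported fusion rate
    return group in ("I", "IIA")

# Reported counts: TM fused 18 + 9 = 27 of 39, SR fused 35 + 1 = 36 of 39
tm_fused, tm_total = 27, 39
sr_fused, sr_total = 36, 39
table = [[tm_fused, tm_total - tm_fused],
         [sr_fused, sr_total - sr_fused]]
odds_ratio, p_value = fisher_exact(table)  # article reports P < 0.05

print(f"TM {tm_fused / tm_total:.0%} vs SR {sr_fused / sr_total:.0%}, P = {p_value:.3f}")
```

The angle thresholds and group definitions are taken directly from the preceding row; assigning every remaining case to IIA or IIB purely by the mobility cut-off is a simplification of the radiologists' consensus reading.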
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Efficacy and Tolerability of Budesonide/Formoterol in One Hydrofluoroalkane Pressurized Metered-Dose Inhaler in Patients with Chronic Obstructive Pulmonary Disease\n\n ABSTRACT:\nBackground: Combination therapy with a long-acting bronchodilator and an inhaled corticosteroid (ICS) is recommended in patients with chronic obstructive pulmonary disease (COPD) who have frequent exacerbations. The efficacy and tolerability of the combination of budesonide/formoterol have been demonstrated in patients with COPD when administered via the dry powder inhaler (DPI) in a 1-year study and when administered via the hydrofluoroalkane (HFA) pressurized metered-dose inhaler (pMDI) in a 6-month study. Objective: This study assessed the long-term efficacy and tolerability of budesonide/formoterol HFA pMDI in patients with moderate to very severe COPD. Methods: This was a 12-month, randomized, double-blind, double-dummy, parallel-group, active- and placebo-controlled, multicentre study (NCT00206167) of 1964 patients aged ≥40 years with moderate to very severe COPD conducted from 2005 to 2007 at 237 sites in the US, Europe and Mexico. After 2 weeks of treatment based on previous therapy (ICSs, short-acting bronchodilators allowed), patients received one of the following treatments twice daily: budesonide/formoterol pMDI 160/4.5 μg × two inhalations (320/9 μg); budesonide/formoterol pMDI 80/4.5 μg × two inhalations (160/9 μg); formoterol DPI 4.5 μg × two inhalations (9 μg); or placebo. Main outcome measures: The co-primary efficacy variables were pre-dose forced expiratory volume in 1 second (FEV1) and 1-hour post-dose FEV1. Results: Budesonide/formoterol 320/9 μg demonstrated greater improvements in pre-dose FEV1 versus formoterol (p = 0.008), and both budesonide/formoterol doses demonstrated greater improvements in 1-hour post-dose FEV1 versus placebo (p < 0.001). The rate of COPD exacerbations was lower in both budesonide/formoterol groups compared with formoterol and placebo (p ≤ 0.004). Both budesonide/formoterol doses were more effective than placebo (p ≤ 0.006) for controlling dyspnoea and improving health status (St George's Respiratory Questionnaire). All treatments were generally well tolerated. The incidence of pneumonia was not different for active (3.4–4.0%) and placebo (5.0%) groups. Conclusions: Budesonide/formoterol pMDI (320/9 μg and 160/9 μg) improved pulmonary function and reduced symptoms and exacerbations over 1 year in patients with moderate to very severe COPD. Only budesonide/formoterol pMDI 320/9 μg demonstrated greater efficacy for both co-primary variables compared with formoterol DPI 9 μg. 
Both budesonide/formoterol pMDI dosages were well tolerated relative to formoterol and placebo.\n\nABSTRACT.ELECTRONIC SUPPLEMENTARY MATERIAL:\nSupplementary material is available for this article at 10.2165/00003495-200969050-00004 and is accessible for authorized users.\n\nBODY.BACKGROUND:\nIn patients with chronic obstructive pulmonary disease (COPD) and frequent exacerbations, the combination of a long-acting bronchodilator and an inhaled corticosteroid (ICS) is recommended.[1] Budesonide and formoterol delivered via a single dry powder inhaler (DPI; Symbicort® Turbuhaler®; AstraZeneca, Lund, Sweden) have been shown to improve pulmonary function[2,3] and reduce COPD exacerbation rates.[2,3] Fixed-combination budesonide/formoterol is available in the US in a hydrofluoroalkane pressurized metered-dose inhaler (pMDI; Symbicort® Inhalation Aerosol; AstraZeneca LP, Wilmington, DE, USA). While most patients with COPD can benefit from either a pMDIor DPI, the pMDI formulation allows patients whomay have inadequate peak inspiratory flow for optimal delivery from a DPI to use the pMDI effectively. A recent 6-month study demonstrated the efficacy and tolerability of budesonide/formoterol pMDI in patients with moderate to very severe COPD.[4] Treatment with budesonide/formoterol pMDI resulted in significant improvements in pulmonary function, dyspnoea and health-related quality of life; however, the duration of the study was limiting with respect to the analysis of low-frequency events, such as COPD exacerbations, and precluded an assessment of safety outcomes with long-term exposure.[4] In this study, the long-term efficacy and tolerability of two dosage strengths of budesonide/formoterol pMDI were assessed over 12 months in patients with moderate to very severe COPD.\n\nBODY.PATIENTS AND METHODS.PATIENTS:\nThe inclusion criteria were designed to select a population with moderate to very severe COPD with previous exacerbations (i.e. appropriate candidates for combination ICS/long-acting β2-adrenoceptor agonist [LABA] therapy): age ≥40 years, diagnosis of symptomatic COPD for >2 years, ≥10 pack-year smoking history, prebronchodilator forced expiratory volume in 1 second (FEV1) of ≤50% of predicted normal and prebronchodilator FEV1/forced vital capacity (FVC) of <70%. Patients were to have a Modified Medical Research Council dyspnoea scale score of ≥2 and a history of at least one COPD exacerbation requiring oral corticosteroids or antibacterials within 1–12 months before the first study visit. Additional enrolment criteria were the same as those in a similar 6-month study by Tashkin et al.[4]\n\nBODY.PATIENTS AND METHODS.STUDY DESIGN:\nThis was a randomized, double-blind, double-dummy, parallel-group, active- and placebo-controlled, multicentre trial (ClinicalTrials.gov identifier: NCT00206167) conducted from 2005 to 2007 at 237 sites in the US, Europe and Mexico. The study protocol was approved by a local institutional review board and ethics committee, and written informed consent was obtained from patients. The study was designed to conform with the Declaration of Helsinki, and was consistent with the International Conference on Harmonisation and Good Clinical Practice and applicable regulatory requirements. After meeting eligibility criteria, patients entered a 2-week run-in period, during which they received ICS monotherapy if previously stable on ICS (alone or in combination) and ipratropium bromide at a fixed dose if previously receiving anticholinergics. 
Albuterol (salbutamol) was permitted for rescue use throughout the study. At randomization, all previous ICSs were discontinued and patients were randomized to one of four treatments (figure 1). Patients who were treated with ipratropium bromide during the run-in period were allowed to continue on it throughout the study period. Other allowed and disallowed concomitant medications were the same as those reported in the study by Tashkin et al.[4]\nFig. 1Study design. To maintain blinding, patients received both a pressurized metered-dose inhaler (pMDI) and a dry powder inhaler (DPI) containing either active treatment or double-dummy placebo (PL) as appropriate. This study used formoterol (FM) DPI (Oxis® Turbuhaler®, AstraZeneca, Lund, Sweden) as the FM comparator because FM is not available as a hydrofluoroalkane pMDI in the US. A previous study in asthma patients reported equivalent FM-related bronchodilatory effects when FM was administered in combination with budesonide (BUD) via pMDI or alone via DPI.[5] Patients were asked to return to the clinic for follow-up visits 3–8 at the end of months 1, 2, 4, 6, 9 and 12, and received a telephone call 4 weeks after the last clinic visit. bid = twice daily; R = randomization.\n\nBODY.PATIENTS AND METHODS.EFFICACY EVALUATIONS:\nThe co-primary efficacy variables, measured at all clinic visits, were pre-dose FEV1 and 1-hour post-dose FEV1. Additional pulmonary function variables included pre-dose and 1-hour post-dose FVC measured at all clinic visits, and morning and evening peak expiratory flow (PEF) recorded daily in patient diaries. In a subset of patients (n = 491 of 1964; 25%) who agreed to undergo spirometry testing, 12-hour serial spirometry was performed and pre-dose and 1-hour post-dose inspiratory capacity (IC) were collected at randomization and months 6 and 12. FEV1 was measured pre-dose and at 5, 15, 30, 60, 120, 180, 240, 360, 480, 600 and 720 minutes post-dose. Baseline-adjusted average 12-hour FEV1 was calculated as the area between the 12-hour post-dose FEV1-over-time curve and the baseline pre-dose FEV1, divided by observation time. Spirometry was performed according to American Thoracic Society guidelines.[6] Crapo-predicted normals for FEV1 were used.[7] Secondary efficacy endpoints included COPD exacerbations, dyspnoea (separate and composite scores) and health status, all assessed as previously described.[4] A COPD exacerbation was defined as worsening of COPD requiring an oral corticosteroid or hospitalization.[4] Dyspnoea was assessed daily before the evening dose of study medication using the Breathlessness Diary, a validated single-item component of the Breathlessness Cough and Sputum Scale (BCSS).[8] Health status was assessed at months 1, 2, 6 and 12 using the St George's Respiratory Questionnaire (SGRQ).[9,10] Sleep score, percentage of awakening-free nights (sleep score of 0) and study rescue medication use were assessed as previously described by Tashkin et al.[4]\n\nBODY.PATIENTS AND METHODS.SAFETY EVALUATIONS:\nSafety was assessed by adverse event (AE) reporting. Pneumonia events were reported by physicians based on the Medical Dictionary for Regulatory Activities (version 10.0) pneumonia-related preferred terms (pneumonia, bronchopneumonia, lobar pneumonia or pneumonia staphylococcal). 
Vital signs and 12-lead ECGs were evaluated as previously described by Tashkin et al.[4] Subsets of patients were assessed for 24-hour urinary cortisol (n = 179), 24-hour Holter monitoring (n = 520), bone mineral density (BMD) at the hip and spine regions (n = 326), and ophthalmological assessments (n = 461), including intraocular pressure and progression of lenticular opacities. In all patients, blood and urine samples were collected, and comprehensive physical examinations were carried out at the time of screening and at the end of month 12. Blood and urine samples were analysed by a central laboratory (Quest Diagnostics Clinical Laboratories, Van Nuys, CA, USA). Vital signs, including heart rate and blood pressure, were measured at all study visits. Samples to assess 24-hour urinary cortisol levels were collected in a subset of patients at or before randomization and within 1 week before the 6- and 12-month visits, and analysed by Quest Diagnostics. 24-Hour Holter monitoring was performed at screening and months 1 and 4, BMD assessments (two sets of dual energy x-ray absorptiometry scans in the hip and spine regions) were performed at screening and the end of month 12, and ophthalmology assessments (intraocular pressure and lenticular opacities) were performed at screening and the end of months 6 and 12.\n\nBODY.PATIENTS AND METHODS.STATISTICAL ANALYSES:\nThe efficacy analysis set (i.e. intent-to-treat population) included all randomized patients who received at least one dose of randomized study medication and contributed sufficient data for at least one co-primary or secondary efficacy endpoint to be calculated during the randomized treatment period. The safety analysis population included all randomized patients who received at least one dose of randomized study medication and from whom any post-randomization data were available. For the subsets of patients who underwent serial spirometry, 24-hour urinary cortisol, BMD, 24-hour Holter monitoring and ophthalmological analyses, analysis sets were defined as those patients who received at least one dose of randomized study medication and had baseline and on-treatment values for the variable being assessed. Patients who discontinued prematurely completed final visit (month 12) assessments at the time of discontinuation, followed by the 4-week follow-up. A sample size of approximately 400 patients per treatment group was calculated to allow 90% power to detect a reduction in the number of COPD exacerbations of approximately 30%, adjusting for overdispersion of 2.3. This sample size ensured >95% power to detect a difference of 0.10L in FEV1, given an estimated standard deviation of 0.3 L. All tests were two-sided, with p ≤ 0.05 considered significant. The prespecified primary comparators were formoterol DPI for pre-dose FEV1 to demonstrate the contribution of budesonide and placebo for 1-hour post-dose FEV1. To address issues of multiplicity relating to multiple-dose comparisons and multiple variables of interest, a fixed-sequence testing procedure was adopted. The primary comparisons were prespecified as budesonide/formoterol pMDI 320/9 μg compared with (i) placebo for pre-dose FEV1 and 1-hour post-dose FEV1; and (ii) formoterol DPI 9 μg for pre-dose FEV1. If significant differences were obtained, comparisons continued with budesonide/formoterol pMDI 320/9 μg compared with placebo for (i) dyspnoea; (ii) SGRQ total score; and (iii) number of exacerbations. 
If significant differences were obtained for each of these key secondary variables, the lower dose of budesonide/formoterol pMDI was tested on the co-primary variables, as previously described, and, if significant differences were obtained, testing continued with number of exacerbations, dyspnoea and SGRQ total score compared with placebo. For all secondary efficacy variables, the primary comparison was budesonide/formoterol pMDI versus placebo. Primary and secondary variables were assessed as the change from baseline to the average over the randomized treatment period except for average 12-hour FEV1, mean FEV1 at 12 hours and SGRQ at end of treatment. Changes from baseline in the co-primary efficacy variables were analysed via analysis of co-variance (ANCOVA), adjusting for treatment, country and baseline value. The following three key secondary efficacy variables were identified: (i) dyspnoea; (ii) SGRQ total score; and (iii) exacerbations. Secondary variables were analysed similar to the co-primary efficacy variables; however, exacerbation rate (number/patient-treatment year) was analysed using a Poisson regression model, adjusted for country and differential randomization time, and time to first COPD exacerbation was described via Kaplan-Meier plot and analysed using the log-rank test. The number and percentage of patients experiencing clinically meaningful changes (based on a prespecified minimal important difference) in dyspnoea were analysed using the Cochran-Mantel-Haenszel test, adjusting for country. Geometric mean 24-hour urinary cortisol levels at end of treatment were compared between treatment groups using a multiplicative ANCOVA model. Mean changes from baseline to the average during the randomized treatment period in 12-lead ECG, and Holter variables and ophthalmology variables were analysed using a model similar to that used for the co-primary variables. BMD variables (natural logarithm of the analysis timepoint minus the natural logarithm of the respective baseline value for hip and spine) were analysed using an ANCOVA model adjusting for country, treatment and natural logarithm of the baseline value. Other safety data were summarized using descriptive statistics. No formal hypothesis testing of the safety data was performed.\n\nBODY.RESULTS.PATIENTS:\nOf 1964 randomized patients, 1355 completed the study (figure 2). Discontinuation was greater with placebo versus budesonide/formoterol and formoterol driven by withdrawal of consent. Time to discontinuation was significantly (p ≤ 0.004) prolonged in both budesonide/formoterol groups versus placebo. AE was the most common reason for study discontinuation, with a similar incidence across treatments (12.1–13.5%). Approximately 60% of patients had documented use of an ICS, either alone or in combination before entering the study. The percentage of patients who discontinued from the study in the formoterol and placebo groups was slightly higher in patients previously treated with ICS (34.9% and 38.0%, respectively) compared with those not previously receiving ICS before the study (26.8% and 33.9%, respectively). In contrast, patients in the budesonide/formoterol 320/9-μg group previously receiving ICS had a lower discontinuation rate compared with those not previously receiving ICS (23.1% vs 34.1%, respectively). In the budesonide/formoterol 160/9-μg group, the percentage of patients who discontinued the study was similar for those who were previously receiving an ICS (29.5%) and those who were not (28.2%).\nFig. 
2Patient disposition. AE = adverse event; bid = twice daily; BUD = budesonide; DPI = dry powder inhaler; FM = formoterol; med = medication; PL = placebo; pMDI = pressurized metered-dose inhaler. Most demographic and disease characteristics at baseline were similar across treatments (table I). However, in the serial spirometry subset (n = 491), mean percentage reversibility was greater in the budesonide/formoterol 320/9 μg (19.9%), budesonide/formoterol 160/9 μg (20.7%) and placebo (19.5%) groups versus formoterol (16.9%). COPD severity[1] was moderate in 17.8%, severe in 60.4% and very severe in 21.5% of patients. Common co-morbid conditions included hypertension (41.6%), lipid profile abnormalities (22.0%), cardiac disease (17.7%), diabetes mellitus (11.0%), osteoporosis (10.5%), cataracts (5.2%), atrial fibrillation/arrhythmia (4.3%) and congestive cardiac failure (2.9%).\nTable IPatient demographic and baseline clinical characteristics of randomized patients\n\nBODY.RESULTS.EFFICACY EVALUATIONS.PULMONARY FUNCTION.CO-PRIMARY ASSESSMENTS:\nImprovements in pre-dose FEV1 were significantly greater for budesonide/formoterol 320/9 μg compared with formoterol (primary comparison; p = 0.008) and for both budesonide/formoterol dosages compared with placebo (p < 0.001) [figure 3a]. Although improvements in 1-hour post-dose FEV1 were significantly greater for both budesonide/formoterol dosages compared with placebo (primary comparison; p < 0.001), budesonide/formoterol 320/9 μg also resulted in significantly greater improvements compared with formoterol (p = 0.023) [figure 3b]. Improvements from baseline were apparent at the first assessment (pre-dose FEV1 at end of month 1; 1-hour post-dose FEV1 at day of randomization) and overall maintained over the 12-month treatment period for both budesonide/formoterol dosages (figure 3a and b).\nFig. 3Co-primary efficacy endpoints. Least squares mean change from baseline by study visit over the randomized treatment period in (a) pre-dose forced expiratory volume in 1 second (FEV1) and (b) 1-hour post-dose FEV1. BUD = budesonide; DPI = dry powder inhaler; FM = formoterol; pMDI = pressurized metered-dose inhaler. * p < 0.001 vs placebo; † p ≤ 0.023 vs FM.\n\nBODY.RESULTS.EFFICACY EVALUATIONS.PULMONARY FUNCTION.SECONDARY ASSESSMENTS:\nMorning and evening PEF improved significantly for all active treatments compared with placebo (p ≤ 0.012) and for both budesonide/formoterol dosages compared with formoterol (p ≤ 0.017) [table II]. Results for pre-dose and 1-hour post-dose FVC are presented in the online supplement (see figures S1a and b in the supplementary material ['ArticlePlus'] at http://links.adisonline.com/DGZ/A5).\nTable IIMean (SD) changes from baseline in additional pulmonary function assessments In the serial spirometry subset, a ≥15% improvement in FEV1 was observed at 5 minutes after dose administration (first assessment) with all active treatments on the day of randomization (budesonide/formoterol 320/9 μg, 17.7%; budesonide/formoterol 160/9 μg, 20.3%; formoterol 9 μg, 16.5%) and at end of treatment (budesonide/formoterol 320/9 μg, 21.8%; budesonide/formoterol 160/9 μg, 22.4%; formoterol 9 μg, 15.0%) [figures 4a, b and c]. There was no evidence of a diminished effect at end of treatment over the 12-hour period in the budesonide/formoterol groups. 
Mean FEV1 at 12 hours and baseline-adjusted average 12-hour FEV1 were significantly improved with both budesonide/formoterol dosages compared with placebo on the day of randomization and at end of treatment (p ≤ 0.002) [table II]. Although both budesonide/formoterol dosages demonstrated significant improvements in mean FEV1 at 12 hours and in baseline-adjusted average 12-hour FEV1 compared with formoterol on the day of randomization (p ≤ 0.029), only budesonide/formoterol 320/9 μg demonstrated this effect at the end of treatment (p ≤ 0.004) [table II].\nFig. 4Mean percentage change from baseline in forced expiratory volume in 1 second (FEV1) over 12 hours at randomization and end of treatment (EOT) for (a) budesonide (BUD)/formoterol (FM) 320/9 μg twice daily (bid) vs placebo, (b) BUD/FM 160/9 μg bid vs placebo and (c) FM 9 μg bid vs placebo. DOR = day of randomization; DPI = dry powder inhaler; pMDI = pressurized metered-dose inhaler. No significant differences in pre-dose IC were observed among the treatment groups (figure 5a). Improvements from baseline to the average over the randomized treatment period in 1-hour post-dose IC were significantly greater with both budesonide/formoterol dosages compared with placebo (p < 0.001) and formoterol (p ≤ 0.018) [figure 5b]. Both dosages of budesonide/formoterol resulted in a 350 mL (≈25%) improvement from baseline to the average over the randomized treatment period in 1-hour post-dose IC, and formoterol resulted in a 210 mL (17%) improvement in 1-hour post-dose IC.\nFig. 5Least squares mean change from baseline by study visit over the randomized treatment period in (a) pre-dose inspiratory capacity (IC) and (b) 1-hour post-dose IC. BUD = budesonide; DPI = dry powder inhaler; FM = formoterol; pMDI = pressurized metered-dose inhaler. * p < 0.001 vs placebo; † p = 0.01 vs FM; ‡ p < 0.05 vs FM; § p < 0.01 vs placebo.\n\nBODY.RESULTS.EFFICACY EVALUATIONS.CHRONIC OBSTRUCTIVE PULMONARY DISEASE (COPD) EXACERBATIONS:\nTime to first COPD exacerbation was significantly prolonged with both budesonide/formoterol dosages compared with placebo (p ≤ 0.004) and with budesonide/formoterol 320/9 μg compared with formoterol (p = 0.026) [figure 6]. In addition, significant reductions in the overall number of exacerbations per patient-treatment year were observed with budesonide/formoterol 320/9 μg and 160/9 μg versus placebo (37% and 41%, respectively; p < 0.001) and formoterol (25% and 29%, respectively; p ≤ 0.004) [see figure S2 in the supplementary material]. These reductions were driven by exacerbations treated with oral corticosteroids, the rate of which was reduced with budesonide/formoterol 320/9 μg and 160/9 μg versus placebo (37.5% and 42.9%, respectively) and formoterol (24.1% and 30.6%, respectively) [p ≤ 0.006]. In addition, the percentage of patients in the budesonide/formoterol 320/9 μg, budesonide/formoterol 160/9 μg, formoterol and placebo groups who experienced an exacerbation was greater in those receiving ICS therapy either alone or in combination before the study (33.7%, 38.2%, 39.9% and 40.1%, respectively) compared with those not previously receiving ICS (25.8%, 24.9%, 29.4% and 32.8%, respectively).\nFig. 6Kaplan-Meier probability curve for the time to first chronic obstructive pulmonary disease exacerbation during randomized treatment. BUD = budesonide; DPI = dry powder inhaler; FM = formoterol; pMDI = pressurized metered-dose inhaler. 
* p ≤ 0.004 vs placebo; † p = 0.026 vs FM.\n\nBODY.RESULTS.EFFICACY EVALUATIONS.HEALTH STATUS AND COPD SYMPTOMS:\nImprovements in SGRQ total score were significantly greater for both budesonide/formoterol dosages compared with placebo (p ≤ 0.006) and for budesonide/formoterol 160/9 μg compared with formoterol (p = 0.006; table III). The improvements from baseline in SGRQ total score in the active treatment groups were not clinically meaningful versus placebo (i.e. decrease of ≥4 points). Significantly greater improvements in COPD symptom variables were observed for both budesonide/formoterol dosages versus placebo for all variables (BCSS, dyspnoea score, cough score, sputum score, sleep score, percentage of awakening-free nights and total daily rescue medication use) [p ≤ 0.003]. Compared with formoterol, mean improvements were significantly greater with budesonide/formoterol 320/9 μg (p ≤ 0.038) for all COPD symptom variables except sputum score and percentage of awakening-free nights, and with budesonide/formoterol 160/9 μg (p ≤ 0.047) for all COPD symptom variables except BCSS, dyspnoea and sputum score (table IV).\nTable IIIMean changes (SD) in SGRQ total and domain scores from baselinea to end of treatmentbTable IVMean (SD) changes in chronic obstructive pulmonary disease (COPD) symptom variables from baselinea to the average over the randomized treatment period Improvements (i.e. reductions) in dyspnoea scores were significantly greater for all active treatments versus placebo (p ≤ 0.003) and for budesonide/formoterol 320/9 μg versus formoterol (p = 0.032; table IV). All active treatment arms demonstrated ≥0.2 points (minimal important difference) change from baseline, but only budesonide/formoterol 320/9 μg had an increase of ≥0.2 points over placebo. A significantly (p < 0.001) greater percentage of patients in both budesonide/formoterol groups and in the formoterol group experienced clinically meaningful improvements in dyspnoea versus placebo (see figure S3 in the supplementary material).\n\nBODY.RESULTS.SAFETY EVALUATIONS:\nMean treatment exposure was lowest for placebo (270 days) compared with budesonide/formoterol 320/9 μg (305 days), budesonide/formoterol 160/9 μg (299 days) and formoterol (289 days). The most commonly reported AE (irrespective of causality) was COPD, which had a lower incidence with budesonide/formoterol 320/9 μg versus placebo and a slightly higher incidence with budesonide/formoterol 160/9 μg versus placebo (table V). AEs considered by the investigator to be related to study medication were generally similar among treatment groups with the most commonly reported being oral candidiasis, COPD and dysphonia (see table SI in the supplementary material).\nTable VOverall adverse events (AEs) [irrespective of relationship to study medication] reported by ≥3% of patients The total incidence of pneumonia-related AEs (pneumonia, bronchopneumonia, lobar pneumonia and pneumonia staphylococcal) was similar for budesonide/formoterol 320/9 μg and 160/9 μg (4.0% and 3.4%, respectively) compared with formoterol (3.4%) and placebo (5.0%) [see table SII in the supplementary material]. For potential lung infections other than pneumonia, the incidence was slightly higher in the active treatment groups versus placebo, which was driven largely by bronchitis (see table SII in the supplementary material). 
AEs typically or potentially associated with local and systemic effects of inhaled corticosteroids (local effects: aphonia, dysphonia, oral candidiasis and candidiasis; systemic effects: weight gain, adrenal suppression, ocular effects, skin effects, psychiatric disorder, diabetes control, thirst, taste effects and bone effects) were more frequent with budesonide/formoterol 320/9 μg (10.3% and 4.0%, respectively) and budesonide/formoterol 160/9 μg (5.7% and 4.5%, respectively) than formoterol (0.6% and 2.6%, respectively) and placebo (2.5% and 2.7%, respectively). The overall incidence of LABA class effects (i.e. tremor, palpitation, tachycardia, potassium changes, glucose changes, headache, agitation, anxiety, sleep effects and muscle cramp) was low, but higher among active treatment groups (budesonide/formoterol 320/9 μg [9.5%], budesonide/formoterol 160/9 μg [8.9%] and formoterol [6.5%]) versus placebo (4.8%). The incidence of cardiac-related AEs was higher for all active treatments (10.5–11.3%) versus placebo (6.9%). The most common cardiac-related AEs were hypertension and angina pectoris (budesonide/formoterol 320/9 μg [2.4% and 1.2%, respectively], budesonide/formoterol 160/9 μg [1.6% and 0.8%, respectively], formoterol [2.6% and 1.8%, respectively] and placebo [2.1% and 1.0%, respectively]). Although the number of atrial fibrillation AEs was low, all cases occurred within the active treatment groups. Discontinuations due to AEs (DAEs) occurred in 238 patients, with a similar incidence observed across treatment groups (range 11.3–12.5%) [see table SIII in the supplementary material]. The most common DAE was COPD, which was highest in the formoterol group (7.3%) and lowest in the budesonide/formoterol 320/9 μg group (4.0%); the incidence was similar in the budesonide/formoterol 160/9 μg (6.1%) and placebo (6.0%) groups (see table SIII in the supplementary material). The overall incidence of non-fatal serious adverse events (SAEs) was highest with formoterol (17.8%) compared with the other treatment groups and higher with budesonide/formoterol 320/9 μg (15.6%) and budesonide/formoterol 160/9 μg (13.6%) versus placebo (12.1%) [see table SIV in the supplementary material]. The most common non-fatal SAEs (occurring in five or more patients [all treatment groups combined]) were COPD (6.8%), pneumonia (1.5%), atrial fibrillation (0.5%), angina pectoris (0.3%), bronchitis (0.3%) and coronary artery disease (0.3%). These non-fatal SAEs occurred at a similar incidence in all treatment groups, except for COPD, which was slightly more common in the active treatment groups (budesonide/formoterol 320/9 μg [7.1%], budesonide/formoterol 160/9 μg [6.7%] and formoterol [7.9%]) versus placebo (5.6%). Fifteen patients died during the randomized treatment period with no imbalance across treatments: three in the budesonide/formoterol 320/9 μg group; six in the budesonide/formoterol 160/9 μg group; two in the formoterol group; and four in the placebo group. Fifteen patients died after the randomized treatment period (≥2 days after stopping study treatment) with no imbalance across treatments: five in the budesonide/formoterol 320/9 μg group; two in the budesonide/formoterol 160/9 μg group; four in the formoterol group; and four in the placebo group. None of the deaths were considered drug related by the investigator. Clinically significant changes in vital signs, 24-hour urinary cortisol, and 12-lead ECGs, QT interval and Holter assessments were rare. 
The geometric mean values of 24-hour urinary cortisol at 6 months and end of treatment were lower in both budesonide/formoterol dosage groups compared with formoterol and placebo (see table SV in the supplementary material). Significant differences in 24-hour urinary cortisol were observed between the budesonide/formoterol 320/9 μg and placebo groups (p = 0.035) at month 6 and between the budesonide/formoterol 320/9 μg and formoterol groups (p = 0.044) at end of treatment. Corrected QT (QTc) intervals ≥450 msec were experienced by similar percentages of patients across all treatment groups, whereas few patients experienced QTc intervals ≥500 msec or a QTc change ≥60 msec (see table SVI in the supplementary material). Data from ECG and Holter recordings showed a similar incidence of new onset atrial fibrillation in the active and placebo groups. BMD was stable over the 1-year study period in all treatment groups, with small but statistically significant differences in changes from baseline observed for budesonide/formoterol 320/9 μg compared with all other treatments for total lumbar spine BMD (p ≤ 0.037) and for budesonide/formoterol 320/9 μg versus formoterol for total hip BMD (p = 0.012) [see table SVII in the supplementary material]. The differences between the treatment groups in changes from baseline in BMD were of unclear clinical relevance, as the geometric least squares mean ratios for these treatment group comparisons were close to 1 (range 0.98–0.99). Minor increases in lenticular opacities and intraocular pressure were noted across all treatment groups in the ophthalmological analysis subset; the magnitude of these changes was similar among the treatment groups (see table SVIII in the supplementary material). Small but significant differences were observed for the change from baseline in posterior subcapsular score between the budesonide/formoterol groups (p = 0.022). Clinically significant changes in ophthalmological assessment were infrequent.\n\nBODY.DISCUSSION:\nIn this study, both budesonide/formoterol dosage strengths resulted in significant improvements from baseline over the randomized treatment period in pulmonary function, health status and COPD symptoms, as measured by dyspnoea, cough and sputum production, and a clinically significant reduction in COPD exacerbation rate compared with placebo. In addition, both budesonide/formoterol pMDI dosages were well tolerated relative to formoterol alone and placebo. The results of this 12-month study confirm those from the earlier related 6-month study by Tashkin et al.[4] and are consistent with those from studies of budesonide/formoterol DPI.[2,3] Taken together, the results demonstrate that this ICS/LABA combination provides benefits beyond formoterol alone in the treatment of COPD. The contribution of budesonide to improvements from baseline in pulmonary function and COPD symptoms was demonstrated by the greater efficacy of budesonide/formoterol 320/9 μg compared with formoterol alone for the co-primary variables of pre-dose FEV1 and 1-hour post-dose FEV1, and for dyspnoea and total BCSS scores, supporting use of the higher-dose combination for COPD symptoms. Both budesonide/formoterol dosage strengths demonstrated greater improvements from baseline compared with formoterol alone in morning and evening PEF, 12-hour FEV1, symptom scores and rescue medication use. 
In addition, the 25–29% reduction in exacerbation rate observed with both budesonide/formoterol dosage strengths beyond the substantial reduction achieved with formoterol alone further demonstrates the important contribution of budesonide to the combination product. The magnitude of the reduction in exacerbation rates reported in this study is similar to that reported previously for fixed-dose regimens of budesonide/formoterol 320/9 μg twice daily[2–4] and fluticasone propionate/salmeterol[11–13] relative to LABA alone. A key difference between this 12-month study and the 6-month study of similar design reported by Tashkin et al.[4] is that this study was powered to show a difference in exacerbations, while the 6-month study was not. This study demonstrated a significant reduction in exacerbation rates in patients treated with budesonide/formoterol 320/9 μg and 160/9 μg compared with those receiving formoterol or placebo. The fact that a greater percentage of patients in the formoterol and placebo groups who previously received ICS therapy, either alone or in combination, experienced exacerbations compared with those who did not previously receive ICS therapy may result in a skewing of the data; however, a similar response was observed in combination budesonide/formoterol patients with a greater percentage of patients who previously received ICS experiencing exacerbations compared with patients who were not previously treated with ICS therapy. Similarly, a greater percentage of patients who previously received ICS therapy discontinued in the formoterol and placebo groups than those who did not previously receive ICS therapy; however, this trend was reversed in the combination budesonide/formoterol groups. The long-term nature of this study allowed for a more thorough assessment of tolerability compared with the 6-month study.[4] No differences in pneumonia incidence were observed among treatment groups in this study or in the 6-month study, despite a shorter exposure time in the placebo group.[4] These results contrast with previous COPD studies that showed an increased incidence of pneumonia with fluticasone propionate/salmeterol treatment.[11–14] However, in this study and the 6-month study,[4] the incidence of potential lung infections other than pneumonia was slightly higher in all active treatments compared with placebo; these differences were largely driven by an increased incidence of bronchitis. The incidence of SAEs, including COPD SAEs, was slightly higher in both budesonide/formoterol groups compared with placebo. This finding may be attributed to the earlier discontinuation in the placebo group, which resulted in a shorter study drug exposure (35 and 29 fewer days vs budesonide/formoterol 320/9 μg and 160/9 μg, respectively). Although the overall incidence of atrial fibrillation, a known LABA class effect, was small (1.1%), an imbalance was noted between the formoterol-containing arms and placebo. However, no differences in new-onset atrial fibrillation were apparent among groups based on 24-hour Holter monitoring or ECG data collected at repeated visits during the randomized treatment period. 
Overall, the safety profile of budesonide/formoterol pMDI in this 12-month study was similar to that reported in the 6-month study by Tashkin et al.,[4] suggesting no increase in safety concerns with a longer duration of treatment.\n\nBODY.CONCLUSIONS:\nBoth dosage strengths of budesonide/formoterol pMDI (320/9 μg and 160/9 μg) were effective and well tolerated over 1 year of treatment in this population of patients with moderate to very severe COPD. In addition, budesonide/formoterol pMDI 320/9 μg demonstrated greater efficacy compared with formoterol for pre-dose FEV1, suggesting that the higher dosage strength containing twice the daily dose of budesonide is appropriate in this patient population.\n\nBODY.ELECTRONIC SUPPLEMENTARY MATERIAL:\nSupplementary material, approximately 247 KB.\n\n**Question:** Compared to Placebo what was the result of Budesonide/formoterol pMDI 320/9 μg on Overall use of daily rescue medication improvement?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly increased\n"
}
] |
significantly increased
|
402
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Incisional hernia after upper abdominal surgery: a randomised controlled trial of midline versus transverse incision\n\n ABSTRACT.OBJECTIVES:\nTo determine whether a transverse incision is an alternative to a midline incision in terms of incisional hernia incidence, surgical site infection, postoperative pain, hospital stay and cosmetics in cholecystectomy.\n\nABSTRACT.SUMMARY BACKGROUND DATA:\nIncisional hernias after midline incision are commonly underestimated but probably complicate between 2 and 20% of all abdominal wall closures. The midline incision is the preferred incision for surgery of the upper abdomen despite evidence that alternatives, such as the lateral paramedian and transverse incision, exist and might reduce the rate of incisional hernia. A RCT was preformed in the pre-laparoscopic cholecystectomy era the data of which were never published.\n\nABSTRACT.METHODS:\nOne hundred and fifty female patients were randomly allocated to cholecystectomy through midline or transverse incision. Early complications, the duration to discharge and the in-hospital use of analgesics was noted. Patients returned to the surgical outpatient clinic for evaluation of the cosmetic results of the scar and to evaluate possible complications such as fistula, wound dehiscence and incisional hernia after a minimum of 12 months follow-up.\n\nABSTRACT.RESULTS:\nTwo percent (1/60) of patients that had undergone the procedure through a transverse incision presented with an incisional hernia as opposed to 14% (9/63) of patients from the midline incision group (P = 0.017). Transverse incisions were found to be significantly shorter than midline incisions and associated with more pleasing appearance. More patients having undergone a midline incision, reported pain on day one, two and three postoperatively than patients from the transverse group. The use of analgesics did not differ between the two groups.\n\nABSTRACT.CONCLUSIONS:\nIn light of our results a transverse incision should, if possible, be considered as the preferred incision in acute and elective surgery of the upper abdomen when laparoscopic surgery is not an option.\n\nBODY.INTRODUCTION:\nThe rate of incisional hernia after midline incision is commonly underestimated but probably lies between 2 and 20% [1–5]. Thus, incisional hernia is a major postoperative problem. The treatment of incisional hernia is complicated by high rates of recurrences. Recently, in a randomised controlled trial published by Burger et al. [6], midline incisional hernia repair has been shown to be associated with a 10-year cumulative recurrence rate of 63 and 32% for suture and mesh repair, respectively. The midline incision is the preferred incision for surgery of the upper abdomen, despite evidence that alternatives, such as the lateral paramedian and transverse incision, exist and might reduce the rate of incisional hernia [7]. 
Various approaches to opening the abdomen have been advocated over time. The choice for a certain incision is dependent on the exposure necessary for the desired procedure to succeed. A midline incision, be it supraumbilical, infraumbilical or both, is an approach especially suited for emergency and exploratory surgery because of the quick and generous exposure that can be achieved within a few minutes [8, 9]. The avascular nature of the linea alba minimises blood loss during this procedure. A supraumbilical transverse incision may be utilised in case exposure of the upper abdomen is desired. During this incision, the damage inflicted to the segmental arteries and nerves is previously described as being minimal [10]. Previously, only one randomised controlled trial, comparing transverse and true midline incisions, has been published specifically addressing incisional hernia incidence [11]. To determine whether the use of a transverse incision is an alternative to a midline incision for open cholecystectomy in terms of incisional hernia incidence, surgical site infection, postoperative pain and hospital stay, this randomised controlled trial was performed. This trial was conducted in an era when laparoscopic cholecystectomy was not yet available. The possibility of low incisional hernia rates after transverse incisions and the fact that little is known about potential advantages incited us to publish the relevant results of this randomised controlled trial which has been performed in the past and has only been reported in a Dutch thesis by one of the authors (H.L.). The primary endpoint of this study was the incisional hernia incidence after 12 months of follow-up. Secondary endpoints included pain and cosmetic appearance.\n\nBODY.METHODS.PROTOCOL:\nSome 150 consecutive female patients were randomly assigned to a midline or transverse incision as an approach for elective cholecystectomy or combined cholecystectomy and cholangiography (with or without consecutive choledochotomy) (75 and 75 patients, respectively). Emergency procedures were excluded from participation. The sample size is based on an incisional hernia rate reduction from 20 to 6% at a power of 80% and an error rate of 5%. Obtaining informed consent was conducted in accordance with the ethical standards of the Helsinki Declaration of 1975. The investigation reported was performed with informed consent from all of the patients and followed the guidelines for experimental investigation with human subjects and was approved by the medical ethics committee. An independent statistician prepared closed, tamper-proof envelopes containing the random allocation (Fig. 1). Patients were randomised for one of the procedures in theatre through the opening of the envelopes.Fig. 1Flow chart of patient inclusion and follow-up Patient-related factors that were recorded were age, body mass and length and date of operation. Operation-related factors that were recorded were the exact nature of the operation, length of the incision, the thickness of the subcutaneous fat, surgeon performing the procedure, as well as the duration of the operation (skin-to-skin time). In the immediate postoperative period, the use, dose and type of analgesics was recorded and a pain score was administered. The use of analgesics (morphine 7.5 mg intra-muscular injection, 4 h minimum interval between consecutive injections) was monitored for 48 h after surgery; the pain score was administered for the first 6 days after surgery. 
In patients assigned to surgery through a midline incision, the skin was incised from just below the xyphoid process to just above the umbilicus. The abdominal wall was opened in the midline by incising the linea alba. A Collin type (two-bladed) self-retaining retractor was used to maintain exposure. The abdominal wall was closed in one layer using single polygalactin 910 sutures (Vicryl; Ethicon, Amersfoort, The Netherlands). The skin was consequently closed using running monofilament nylon sutures (Ethilon; Ethicon, Amersfoort, The Netherlands). Patients randomised for a transverse incision received a right-sided unilateral transverse incision between 3 and 4 cm below the costal margin. The rectus muscle was incised. The fibres of the external and internal obliques and the transverse muscles were separated in the direction of their course. Exposure was achieved through the use of a manually held single-bladed retractor. Closure of the abdominal wall was achieved by closure of the peritoneum and the posterior rectus fascia using a continuous, polygalactin 910 suture (Vicryl; Ethicon, Amersfoort, The Netherlands). The anterior rectus sheath and the fascia of the internal and external transverses were closed using simple interrupted polygalactin 910 sutures (Vicryl; Ethicon, Amersfoort, The Netherlands). Towards the end of both procedures, a Redon low-vacuum drain catheter was placed, which was guided outside the abdominal cavity approximately 5 cm from the incision. The skin was consequently closed using continuous monofilament nylon suture (Ethilon; Ethicon, Amersfoort, The Netherlands). All patients received a dose of 5,000 IU of sodium–heparin on the morning of the procedure as thrombosis prophylaxis.\n\nBODY.METHODS.STATISTICAL ANALYSIS:\nThe Pearson χ2 test was used for comparing percentages. In case of small expected numbers, a Fisher's exact test was performed. Continuous variables were analysed using the Mann–Whitney test. A P-value of 0.05 or less (two-sided) was considered to be statistically significant. Means and medians are expressed ±standard deviation (SD).\n\nBODY.METHODS.FOLLOW-UP:\nPatients returned to the surgical outpatient clinic for evaluation of the cosmetic results of the scar and to evaluate possible complications, such as fistula, wound dehiscence and incisional hernia, after a minimum of 12 months follow-up. The patient and the surgeon evaluated the cosmetic results independently and were asked to rate the scar as unsatisfactory, satisfactory or fine. Furthermore, the length and width of the scar was measured.\n\nBODY.RESULTS.STUDY GROUP:\nSome 150 consecutive patients were randomised for participation in this study during an inclusion period from April 1977 until July 1979. Seventy-five patients received a transverse incision and 75 patients a midline incision (Fig. 1). One patient was withdrawn from further follow-up after developing peritonitis and consequent acute respiratory distress syndrome (ARDS) not related to the closure of the abdominal wall 2 days after surgery (transverse incision group). The patients' average age was 51.9 and 51.4 years for the midline and the transverse incision groups, respectively. Furthermore, no differences were found in the body mass and average length between the two groups (Table 1). A cholecystectomy was performed using a transverse incision in 52 patients and utilising a midline incision in 52 patients also. Fifteen and 16 patients, respectively, underwent a combined cholangiography/cholecystectomy. 
A further 7 and 6 patients, respectively, were treated with a cholangiography/cholecystectomy plus additional choledochotomy and the postexploratory placement of a T-tube.Table 1Baseline characteristics of the patients undergoing surgery, according to study groupVariableMidline incisionTransverse incisionn = 75n = 74Average age (years) ± SD51.9 ± 14.851.4 ± 13.8Average weight (kg) ± SD71.3 ± 14.568 ± 14.3Average length (cm) ± SD163.5 ± 7.8164 ± 7.3\n\nBODY.RESULTS.SURGEON:\nStaff surgeons performed 17% (13/75 patients) of all procedures performed through a midline incision. The remainder of the procedures through a midline incision was carried out under staff surgeon supervision. Staff surgeons performed 14% of all procedures in the transverse incisions study group (10/74 patients) and supervised the remainder. No statistically significant difference was found between the two randomised groups (P = 0.65).\n\nBODY.RESULTS.DURATION OF SURGERY:\nNo significant difference was noted in the skin-to-skin time (in min) for the two different incisions (Table 2). Surgery utilising midline and transverse incision took 56.9 ± 29.3 and 53.2 ± 26.8 min, respectively (P = 0.35). The total duration of the procedures until extubation (in min) did not differ between the midline and transverse incisions (71.0 ± 30.5 and 67.0 ± 27.3, respectively, P = 0.34).Table 2Length of incision, thickness of subcutaneous fat and skin-to-skin time, according to study groupVariableMidline incisionTransverse incisionP-valueLength of incision (mm) ± SDa164 ± 28140 ± 24<0.0001Thickness of subcutaneous fat (mm) ± SDa34.5 ± 13.030.3 ± 12.40.05Skin-to-skin time (min) ± SDa56.9 ± 29.353.2 ± 26.80.40Width of scar (mm) ± SDb8.3 ± 1.43.3 ± 1.2<0.0001aMeasured during surgery in 75 midline and 74 transverse incisionsbMeasured at follow-up in 63 and 60 midline and transverse incisions, respectively\n\nBODY.RESULTS.PAIN AND ANALGESICS:\nSignificantly more patients, having undergone a midline incision, reported pain on day one, two and three postoperatively (P < 0.0001, Table 3). In the midline incision group, 28/75 patients required no or only one dose of analgesics; the remainder required two or more doses. Thirty-one patients operated through a transverse incision required no analgesics or only one dose; 43 patients (the remainder) required two or more. No significant difference in the use of analgesics was found between the groups (P = 0.69).Table 3Postoperatively reported pain, according to study group, shown as the number of patients reporting pain at the time points indicated (percentage), with the remainder of patients reporting no painTime point after surgeryMidline incision n = 75Transverse incision n = 74P-valuePatients reporting pain, n (%)Patients reporting pain, n (%)3–4 h68 (91)60 (81)0.09First day64 (85)39 (53)<0.0001Second day57 (76)23 (31)<0.0001Third day28 (37)9 (12)<0.0001Fourth day5 (7)3 (4)0.72Fifth day0 (0)1 (1)0.50Sixth day0 (0)1 (1)0.50\n\nBODY.RESULTS.COMPLICATIONS:\nPostoperative complications (Table 4) were seen in 16 out of 75 patients (21%) from the midline incision group and in 15% from the transverse incision group (11 patients) (P = 0.30). Briefly, one patient in each group developed cardiac complications; 8 and 6 patients developed urinary retention after the midline and transverse incisions, respectively (P = 0.59). 
Surgical site infections were diagnosed in 7 and 3 patients, respectively (P = 0.33).Table 4Rate of complications after surgery, according to study group, shown as the number of patients diagnosed with complications (percentage)ComplicationMidline incisionTransverse incisionP-valuen = 75 n (%)n = 75 n (%)Cardiac1 (1)1 (1)1Urinary retention8 (12)6 (8)0.59ARDS01 (1)0.50Surgical site infection7 (9)3 (4)0.33Haemorrhage1 (1)00.50Pneumonia01 (1)0.50Total17 (23)12 (16)0.30\n\nBODY.RESULTS.DISCHARGE:\nForty-five (60%) and 42 (57%) patients from the patients having undergone a midline or a transverse incision, respectively, were discharged on day 6 or 7 postoperatively. The remaining patients from each group left hospital care on day 8 or later. The duration of hospital admission did not differ between the two types of incision (P = 0.74).\n\nBODY.RESULTS.COSMETICS:\nThe width and length of all incisions was measured during the follow-up visit (Table 2). The mean width of the scar after the healing of the midline incisions was found to be 8.3 ± 1.4 mm. The mean width of the scar after the healing of the transverse incisions was measured to be 3.3 ± 1.2 mm. This observed difference is significant (P < 0.0001). The length of the incisions was 140 ± 24 mm and 164 ± 28 mm for the transverse and the midline incisions, respectively. The difference in scar length was found to be significant (P < 0.0001).\n\nBODY.FOLLOW-UP:\nEighty-one percent of all patients operated through a transverse incision were seen during the follow-up examination (n = 60). Of the patients operated through a midline incision, 63 out of 75 were seen at the outpatient clinic (84%). The patients that were lost to follow-up could either not be traced or had deceased (Fig. 1). The minimum follow-up for the evaluation of cosmetic results and hernia incidence was 12 months and the maximum was 36 months.\n\nBODY.FOLLOW-UP.INCISIONAL HERNIA:\nFrom the patients that had undergone the procedure through a transverse incision, one (1/60; 2%) presented with an incisional hernia as opposed to 9 patients from the midline incision group (9/63; 14%); 95% confidence interval (CI) 7.5–25.4%. This difference in hernia incidence is significant (P = 0.017). No significant correlation was found between the incisional hernia rate and surgical site infection (P = 0.07).\n\nBODY.FOLLOW-UP.SUBJECTIVE COSMETICS:\nPatients and surgeons alike were asked to rate the appearance of the scar during the postoperative follow-up outpatient clinic visit. Both the surgeons and the patients found the scar resulting from the transverse incision to be more cosmetically pleasing (P < 0.0001 and P = 0.03, respectively, Table 5).Table 5Number of patients and surgeons rating the cosmetics of a scar at follow-upScoreMidline incision (n = 63)Transverse incision (n = 60)Patients, n (%)Surgeons, n (%)Patients, n (%)Surgeons, n (%)Unsatisfactory6 (10)25 (40)2 (3)6 (10)Satisfactory16 (25)27 (43)9 (15)12 (20)Fine41 (65)11 (17)49 (82)42 (70)Total63636060Difference between type of incision: patients P = 0.03; surgeons P < 0.0001\n\nBODY.DISCUSSION:\nThis prospective randomised study of transverse and midline incisions for open cholecystectomy shows that a significant reduction of incisional hernia incidence can be achieved through the use of a transverse incision. Only one other study (published in 1980) reported the incidence of incisional hernia after upper abdominal midline and unilateral transverse incision in a randomised trial. 
No difference between the two techniques (8 and 6% incisional hernia, respectively) was found, but the relatively short follow-up of 6 months, however, may be held accountable for this finding [11]. Three retrospective studies showed rates of incisional hernia of 3.2, 5.4 and 16.5% for midline incision and 1.3, 6.7 and 13.4% for transverse incision without statistically significant differences [12–14]. The possible reason for the rather high incidence of incisional hernia in the midline incision group (14%) may lie in the use of resorbable 910 polygalactin sutures. Nevertheless, the use of the same type of resorbable suture in the closure of the transverse incisions resulted in a 2% hernia rate. There is evidence for the importance of proper technique and choice of incision as a means to reduce incisional hernia being more important than the use of suture material [7]. Furthermore, as mentioned above, it is known that the incidence of incisional hernia in the case of a midline incision lies between 2 and 20%. From our data, the NNT (numbers needed to treat) is calculated to be 8 (95% CI 5–30) and the RRR (relative risk reduction) is 88% (95% CI 23–100%). Luijendijk et al. [15] have published a hernia rate of 2% after Pfannenstiel incisions closed using 910 polygalactin, which is in agreement with our findings in the patients randomised for a transverse incision, emphasising the importance of the incision over the choice of suture material. In our study, significantly fewer patients reported pain on day 1, 2 and 3 after transverse incisions, a result that was also described by other authors [16, 17]. Greenall et al. [18] published a contradictory report (in 1980) in which no significant difference in postoperative pain was found between midline and transverse incisions. The previously mentioned study, however, only analysed 46 out of 572 patients (8%) with regard to pain, which may explain the finding. In the same way, Lacy et al. suspended visual analogue pain scoring in a study comparing midline and transverse incisions for abdominal aortic surgery. Remarkably, the two groups in our study did not differ in terms of postoperative analgesia, a finding that is also reported by Lindgren et al. [17] and Donati et al. [19]. In our study, surgeons as well as patients were significantly more satisfied with the aesthetic appearance after a transverse in comparison with a midline incision. The scars after transverse incisions were found to be significantly shorter and less wide than the midline incisions, which may account for the observed difference. A possible reason for this is that a transverse incision is executed parallel to the prevailing direction of the skin lines on the abdomen and, therefore, the tension on the wound and consequent scar is low. Cholecystectomy has come a long way since this trial. The introduction and widespread acceptance of laparoscopic technique as the treatment of choice has rendered open cholecystectomy to be an operation for exceptional, and perhaps surgically difficult, circumstances. Nowadays, the study reported is hardly feasible, yet, the results are still applicable and very relevant for other surgical procedures in the (upper) abdomen. Knowledge of the favourable results of a transverse incision may aid surgeons in their choice when finding themselves in the unfortunate position of needing conversion to open cholecystectomy. 
In conclusion, this investigation on transverse incisions might be helpful in reducing the incidence of incisional hernia in patients after open cholecystectomy. The midline incision is a preferred manner to achieve exposure of the abdominal cavity and is considered to be easily performed and quick. Although the midline incision is generally accepted, the incidence of incisional hernias is surprisingly high [1–5]. The choice for a particular incision should not only be based on exposure, but also on hernia incidence reduction, especially since recurrence rates after hernia repair are reported to be very high. Furthermore, the recurrence rate after incisional hernia repair is a disappointing 63 and 32% for suture and mesh repair, respectively [6]. In the light of these results, incisional hernia prevention is warranted. In this investigation, it is shown that a significant reduction (from 14.5 to 1.7%) of incisional hernia incidence was achieved by using a transverse incision. Hence, a transverse incision should be considered as the preferred incision in acute and elective surgery of the upper abdomen in which laparoscopic surgery is not an option. Full exposure of two quadrants is feasible through the use of a unilateral transverse incision in, for example, biliary, bariatric, liver and colonic surgery. The transverse incision should be part of the abdominal surgeon's armamentarium and is a preferable incision to prevent the high incidence of incisional hernia after abdominal surgery.\n\n**Question:** Compared to Midline incision what was the result of Transverse incision on Urinary retention?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** no significant difference\n"
}
] |
no significant difference
|
283
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Randomised phase II evaluation of irinotecan plus high-dose 5-fluorouracil and leucovorin (ILF) \n\n ABSTRACT:\nAn open-label randomised comparison of efficacy and tolerability of irinotecan plus high-dose 5-fluorouracil (5-FU) and leucovorin (LV) (ILF) with etoposide plus 5-FU/LV (ELF) in patients with untreated metastatic or locally advanced gastric cancer. One cycle of ILF comprised six once-weekly infusions of irinotecan 80 mg m−2, LV 500 mg m−2, 24-h 5-FU 2000 mg m−2, and ELF comprised three once-daily doses of etoposide 120 mg m−2, LV 300 mg m−2, 5-FU 500 mg m−2. In all, 56 patients received ILF and 58 ELF. Median age was 62 years, Karnofsky performance 90%, and disease status was comparable for both arms. The objective clinical response rates after 14 weeks treatment (primary end point) were 30% for ILF and 17% for ELF (risk ratio (RR) 0.57, 95% confidence interval (CI) 0.29–1.13, P=0.0766). Overall response rates over the entire treatment period for ILF and ELF were 43 and 24%, respectively (RR 0.56, 95% CI 0.33–0.97; P=0.0467). For ILF and ELF, respectively, median progression-free survival was 4.5 vs 2.3 months, time to treatment failure was 3.6 vs 2.2 months (P=0.4542), and overall survival was 10.8 vs 8.3 months (P=0.2818). Both regimens were well tolerated, the main grade 3/4 toxicities being diarrhoea (18%, ILF) and neutropenia (57%, ELF). The data from this randomised phase II study indicate that ILF provides a better response rate than ELF, and that ILF should be investigated further for the treatment of metastatic gastric cancer.\n\nBODY:\nGastric cancer is the fourth most common cancer in Europe and the third leading cause of cancer mortality (Bray et al, 2002). Although gastric cancer has declined over the past 50 years, the incidence of tumours at the gastro-oesophageal junction has increased (Blot et al, 1991). The use of chemotherapy for the management of patients with advanced gastric cancer, who have limited treatment options (Hohenberger and Gretschel, 2003), has only become widely acceptable over the last 20 years (Glimelius et al, 1997; Murad et al, 1993; Pyrhonen et al, 1995). 5-Fluorouracil (5-FU), usually in combination with leucovorin (LV, also referred to as folinic acid), forms the basis of most chemotherapy regimens used for the treatment of gastric cancer. A randomised phase III trial compared combinations of 5-FU with other active drugs in advanced gastric cancer: etoposide, LV and 5-FU (ELF) vs infusional 5-FU plus cisplatin (FUP) vs 5-FU, doxorubicin and methotrexate (FAMTX) (Vanhoefer et al, 2000). The overall response rates (ORRs) ranged from 9% (ELF) to 20% (FUP) and median survival times were between 6.7 months (FAMTX) and 7.2 months (both ELF and FUP). The observed differences were not statistically significant and there is still no definitive regimen for the treatment of gastric cancer. 
The combination of epirubicin, cisplatin and continuous infusion 5-FU (ECF) has been proposed as a standard first-line therapy for gastric cancer as a consequence of its significantly improved response rate (46%) and survival (8.7 months) when compared with FAMTX (Waters et al, 1999). More recently, ELF has been shown to provide better disease control (complete response (CR)+partial response (PR)+stable disease (SD)) for patients with proximal rather than distal tumours (85 vs 48%, P=0.04) (Schulze-Bergkamen et al, 2002). Tolerability, toxicity and ease of administration have become major determinants for selecting an appropriate therapy and ELF has emerged as a convenient, well-tolerated regimen that can be administered on an outpatient basis (Vanhoefer et al, 2000; Schulze-Bergkamen et al, 2002). Irinotecan (CPT-11, Camptosar; Pfizer Oncology, New York, USA) inhibits topoisomerase I thereby disrupting DNA replication and cell division within tumour cells. Response rates between 20 and 23% have been reported for irinotecan monotherapy in untreated gastric cancer (Futatsuki et al, 1994; Kohne et al, 2003). In patients who had failed previous therapy, irinotecan (180 mg m−2) combined with 5-FU (400 mg m−2, bolus) and LV (125 mg m−2) followed by 5-FU infusion (1200 mg m−2 over 48 h) yielded a response rate of 29%, while a further 34% of patients achieved SD (Assersohn et al, 2004). Irinotecan with bolus-LV/5-FU in the first-line treatment of gastric cancer provided a response rate of 22% (Blanke et al, 2001). However, this regimen (irinotecan 125 mg m−2, LV 20 mg m−2 plus 5-FU 500 mg m−2, all given weekly for 4 weeks followed by a 2-week rest) was associated with a high incidence of severe diarrhoea (28%) and neutropenia (36%) infection leading to substantial dose modifications (Blanke et al, 2001). By comparison, the combination of irinotecan with continuous rather than bolus infusions of LV/5-FU exhibited a lower incidence of grade 3 and 4 toxicities in colorectal cancer patients (Douillard et al, 2000; Saltz et al, 2000; Bouzid et al, 2003). Therefore, we have investigated a weekly dose of irinotecan (80 mg m−2) in combination with LV (500 mg m−2) and continuous 5-FU (2000 mg m−2 over 24 h) according to the AIO (Arbeitsgemeinschaft Internistische Onkologie) regimen (i.e. ILF) in gastric cancer patients. In a previous phase I study of ILF in the first- and second-line treatment of gastric cancer, we observed a response rate of 20% with a further 36% of patients reporting SD (Moehler et al, 2003). Importantly, toxicity was sufficiently manageable to allow outpatient-based treatment. Therefore, we initiated the present randomised, controlled, phase II study to compare the efficacy and safety of ILF with ELF in the first-line treatment of metastatic gastric cancer.\n\nBODY.PATIENTS AND METHODS.PATIENTS:\nEligible patients had untreated histologically proven gastric adenocarcinoma, or adenocarcinoma of the oesophagogastric junction with measurable metastatic disease and/or locally recurrent nodal involvement, were aged between 18 and 75 years with a Karnofsky performance score (KPS) ⩾60 and a life expectancy >12 weeks. Patients were required to have adequate haematological (neutrophils ⩾2.0 × 109 l−1, platelets ⩾150 × 109 l−1; haemoglobin ⩾10g dl−1), hepatic (total bilirubin ⩽1.25 × upper normal limit (UNL); aspartate (AST) and alanine (ALT) aminotransferases ⩽3 × UNL) and renal function (creatinine <1.25 × UNL). Patients with previous cancer therapies were excluded from the study. 
All patients provided signed and dated consent before entering the trial. The study was conducted in accordance with the principles of the Declaration of Helsinki and Good Clinical Practice guidelines and the protocol was initially approved by the Ethics committee of Aerztekammer Rheinland-Pfalz and later by all Ethics committees responsible for participating centres.\n\nBODY.PATIENTS AND METHODS.STUDY DESIGN AND RANDOMISATION:\nThis was an open-label, multicentre, phase II randomised trial with two treatment arms. Patients were randomly assigned and stratified according to centre, peritoneal involvement (yes/no) and prior gastrectomy (yes/no). The randomisation process was centralised and performed by the Coordination Centre for Clinical Trials (KKS), Mainz, Germany.\n\nBODY.PATIENTS AND METHODS.ADMINISTRATION OF STUDY DRUGS AND DOSE ADJUSTMENT:\nPatients assigned to ILF (Arm A) received irinotecan 80 mg m−2 intravenously (i.v.) over 60–90 min followed by LV 500 mg m−2 i.v. over 60 min and then 5-FU 2000 mg m−2 i.v. over 24 h, on day 1. Each cycle comprised six once-weekly treatments followed by a 13-day rest period. Systemic prophylactic atropine (0.25 mg) injections for irinotecan-related acute cholinergic symptoms were allowed for the first cycle but not recommended. Prophylactic treatment for delayed diarrhoea was not permitted. However, patients were carefully informed of the potential risk of delayed diarrhoea and neutropenia and the need for early intervention with loperamide, metoclopramide, antibiotics, or hospitalisation and parenteral rehydration in case of refractory diarrhoea (>48 h). Antiemetic treatment was performed using metoclopramide or HT-3 antagonists in a sequential manner. The prophylactic use of colony-stimulating factors was not permitted. Patients assigned to ELF (Arm B) received etoposide 120 mg m−2 i.v. over 60 min, LV 300 mg m−2 i.v. over 5–10 min and then 5-FU 500 mg m−2 bolus i.v. over 2–4 min, on day 1. Each cycle comprised three applications on consecutive days (1–3) followed by an 18-day rest. All study treatments were administered until disease progression, unacceptable toxicity or withdrawal of consent. In the event of toxicity (defined by the National Cancer Institute of Canada expanded common toxicity criteria; NCIC-CTC), treatment delays or dose reductions could be applied as follows. If at any time during a cycle there were moderate reductions in haematological function (neutrophil count 0.5–1.5 × 109 l−1, platelet count 25–75 × 109 l−1) or moderate diarrhoea or stomatitis (>grade 1), the next administration could be delayed for up to 2 weeks. If at any time haematological abnormalities were noted (neutrophils <0.5 × 109 l−1, neutrophils <1 × 109 l−1 with infection or fever, platelets <25 × 109 l−1) or if there were ⩾grade 3 or 4 diarrhoea or stomatitis, treatment had to be delayed until recovery to moderate levels (as described above) after which the following dose reductions were applied. For Arm A, 5-FU was reduced to 1600 mg m−2 and irinotecan to 65 mg m−2. For Arm B, in the case of haematological toxicity, etoposide had to be reduced to 100 mg m−2 and 5-FU to 400 mg m−2, and in the case of diarrhoea or stomatitis, 5-FU was reduced to 400 mg m−2. If a condition persisted despite dose reduction, or if a patient experienced myocardial infarction, treatment was terminated. In the case of hand–foot syndrome, the dose of 5-FU was to be reduced by 20%. 
Delayed diarrhoea was treated immediately with loperamide and rehydration and, if associated with severe neutropenia, a broad-spectrum antibiotic. Hospitalisation with i.v. rehydration was required for grade 4 or persistent (>48 h) diarrhoea, concomitant vomiting, fever, or KPS <60%.\n\nBODY.PATIENTS AND METHODS.STUDY EVALUATIONS:\nAt baseline up to five measurable lesions per organ and 10 lesions in total were to be identified as target lesions, measured using computed tomography (CT), and recorded according to the RECIST system (Response Evaluation Criteria In Solid Tumours; Therasse et al, 2000). The sum of the longest diameters for all target lesions was used as a reference for determining objective tumour response. Tumour responses were evaluated at week 7, week 14 and then every two cycles for patients receiving ILF or every four cycles for patients receiving ELF. Responses were determined according to RECIST as follows: complete response was defined as the disappearance of all target and nontarget lesions with no new lesions and confirmed by two observations at least 4 weeks apart; PR was defined as a reduction of 30% or more in the sums of the longest diameters of all measurable lesions relative to baseline with no new lesions; no change (NC) was defined as neither sufficient shrinkage to qualify for PR nor sufficient increase to qualify for progressive disease (PD) with no new lesions; and PD was defined as ⩾20% increase in the sum of the longest diameters, the occurrence of nontarget lesions (e.g. pleural effusion or ascites) or the appearance of brain metastases independently of performance at sites outside the brain. Safety and tolerability were assessed by regular clinical examinations and assessments of adverse events (weekly, at the end of treatment and at every 3 months of follow-up), disease symptoms, KPS, haematological and biochemical parameters.\n\nBODY.PATIENTS AND METHODS.STATISTICAL ANALYSIS:\nThe primary end point was objective clinical response (CR+PR) based on an interim analysis following 14 weeks of treatment. The one-sided exact Fisher's test was used to compare the treatment arms at the significance level α=5%. The analyses were performed on an intention-to-treat basis including all patients who were treated in the study. The secondary end points were ORR (for the entire treatment period), time to progression, tumour growth control, time to treatment failure (including progression, death or withdrawal) and survival. Time to event data were described by Kaplan–Meier estimates and treatment groups were compared by log-rank test. Time to event data were further evaluated by appropriate proportional Cox's models and results were summarised by hazard ratio point and 95% confidence interval (CI) estimates, and P-values of Wald χ2 test. Binary data were described by risk ratio (RR) point and 95% CI estimates and treatment groups were compared by exact Fisher's test. Binary data were further evaluated by appropriate logistic regression models and were summarised by odds ratio point and 95% CI estimates and P-values of Wald χ2 test. If not specified otherwise, P-values are presented from two-sided tests and two-sided 95% CI are presented. All analyses were performed using SAS version 6.12.\n\nBODY.RESULTS.PATIENT CHARACTERISTICS:\nIn all, 120 patients from 17 centres in Germany were randomised into the study between November 2000 and April 2003. 
Two patients from Arm A and four from Arm B withdrew without receiving study treatment; therefore, the intention-to-treat population contained 114 patients (56 received ILF and 58 received ELF). The baseline characteristics were well balanced between the two treatment groups (Table 1). The median age of patients was 62 years and the median KPS was 90%. In around one-third (31%) of patients, the primary tumour site was the oesophagogastric junction, 62% of patients had liver metastases and in 77% two or more organs were involved.\n\nBODY.RESULTS.RESPONSE RATES:\nThe objective clinical response rates following 14 weeks of treatment (primary end point) were 30% for ILF compared with 17% for ELF (RR 0.57, 95% CI 0.29–1.13, P=0.0766). The ORRs for the entire treatment period and including all assessments prior to discontinuation were 43% (24 of the 56) for ILF and 24% (14 of the 58) for ELF (Table 2). The increased response rate provided by ILF compared with ELF was statistically significant (RR=0.56; 95% CI=0.33–0.97; P=0.0467.) The tumour control rates (CR+PR+NC) were 63% (35 of the 56) and 41% (24 of the 58), respectively. Logistic regression analysis indicated that a baseline KPS ⩽80% reduced the likelihood of a response by 59% compared with patients whose KPS was greater than 80% (P=0.038) (Table 3). After adjustment for KPS, peritoneal involvement and surgery for primary tumour, the regression model also demonstrated that ILF was 138% more likely to provide a response when compared with the ELF regimen (Table 3, P=0.042).\n\nBODY.RESULTS.PROGRESSION-FREE SURVIVAL, TREATMENT FAILURE AND OVERALL SURVIVAL:\nAt the last data cutoff, the median follow-up was 9.4 months in Arm A and 5.8 months in Arm B. At this time, 96 of the 114 patients had died. Disease progression was the major cause of death and accounted for 79% of patients in both treatment groups. One patient from the ILF arm did not comply with the provided recommendations for the treatment of prolonged grade 3 diarrhoea and consequently died (i.e. toxic death). One patient in the ELF arm died from a cardiovascular event. Compared with ELF, the ILF regimen extended median progression-free survival, median time to treatment failure and median overall survival (Table 4). However, when the treatment groups were compared by log-rank test, there was no significant difference between the two treatments for any of these parameters (e.g. the Kaplan–Meier survival plot as shown in Figure 1). Investigational analyses found that the risk of progression was increased in patients with a primary tumour in the oesophagogastric junction and in those with metastatic involvement in two or more organs (Table 3). As would be expected, the risk of death was increased in patients with a low KPS, in those with two or more involved organs and in those with peritoneal involvement who received ELF (Table 3).\n\nBODY.RESULTS.SAFETY:\nThe median number of cycles administered in the study was two for ILF (Arm A) and three for ELF (Arm B) (Table 5). Although the median treatment duration period was over twice as long with ILF than with ELF, there were more dose administration delays (70%) and dose reductions (75%) with ILF than with ELF (52 and 45%, respectively). The main reason for discontinuing study treatment was disease progression; 54% of patients receiving ILF and 72% receiving ELF. 
Although only one patient in each treatment group withdrew because of treatment-related toxicity, five patients receiving ILF and three patients receiving ELF either withdrew consent or refused further treatment. The incidence of grade 3/4 haematological toxicities was low in both treatment groups with the exception of neutropenia, which was reported by 57% of patients receiving ELF (Table 6). There were more grade 3/4 gastrointestinal toxic events with ILF, notably diarrhoea, which was reported by 18% of patients compared with no reports with ELF. Grade 3/4 alopecia was reported by a significant proportion of patients receiving ELF (28%), but was only seen in 5% of those receiving ILF.\n\nBODY.DISCUSSION:\nAlthough chemotherapy regimens offer at best a slight, albeit statistically significant, improvement in survival for patients with gastric cancer, they are associated with a degree of toxicity that limits their value as a palliative treatment (Vanhoefer et al, 2000; Schoffski, 2002; Diaz-Rubio, 2004). The primary end point of clinical response at 14 weeks was selected so that a statistical comparison at a fixed time point could be made. However, as it is the convention in such studies, patients were treated until progression, and could respond to treatment at a later point. Therefore, the overall response and survival rates obtained from the entire dosing period provide a more clinically significant assessment of the efficacy of these regimens for discussion in relation to other trials in gastric cancer. The irinotecan-based combination provided again a greater ORR than that seen with the commonly used ELF regimen (43 vs 24%, respectively, P=0.0467). Overall response rates for ELF reported in previous studies range from 9 to 23% (Vanhoefer et al, 2000; Schulze-Bergkamen et al, 2002) and this compares well with the 24% response rate reported in this study. Accordingly, an ORR of nearly 50% for ILF, as seen in this study, is a substantial improvement and is in the range of previous reports of the use of this drug combination in this setting (Blanke et al, 2001; Moehler et al, 2003). The overall survival data in the present study also compare well with those from previous studies. The median overall survival with ELF has been reported at 7.2 and 8.0 months (Vanhoefer et al, 2000; Schulze-Bergkamen et al, 2002), which is similar to both the 8.3 months reported here and the data reported for irinotecan-based regimens in the second-line setting between 7.0 and 7.6 months (Moehler et al, 2003; Assersohn et al, 2004). By comparison, there was a nonsignificant trend for increased median survival with ILF in this study (10.8 months) and this compares well with data reported for more recent exploratory combinations such as capecitabine and docetaxel (10.5 months), and epirubicin, docetaxel and cisplatin (11.0 months) (Lee et al, 2004; Park et al, 2004). The same can be said of the progression-free survival period in the ILF group of 4.5 months, which compared well with the 4.1 and 5.2 months reported recently for docetaxel-based regimens (Lee et al, 2004; Park et al, 2004). In other randomised phase II studies, continuous 5-FU/LV infusion plus irinotecan has also provided promising efficacy (ORRs of 40–42%, median progression-free survival periods of 6.5–6.9 months and median overall survival periods of 10.7–11.3 months; Bouche et al, 2004; Pozzo et al, 2004). 
Consequently, large phase III studies are being considered to investigate irinotecan in combination with continuous 5-FU/LV infusion regimens. When patient histories, disease status and other factors were examined for their effects on clinical outcome, those patients who were in better general health (good performance status, low tumour burden) were more likely to achieve a response and less likely to have a progression event or die, regardless of the treatment arm to which they were randomised. Patients with peritoneal involvement at presentation have a generally poorer prognosis and as a group face a desperate need for improved treatment options (Rau et al, 1996). The data from this study demonstrated that these patients are less likely to suffer a fatal event if treated with ILF rather than ELF. This is potentially an important observation for the management of these difficult to treat patients (Blot et al, 1991; Bray et al, 2002). The extension of meaningful survival remains a major objective for oncologists who must therefore consider the impact of treatment-related toxicity. Overall, the occurrence of the toxicities in this study was consistent with the safety profiles of irinotecan, etoposide and 5-FU/LV. The ILF combination was well tolerated with a low and acceptable incidence of haematological toxicity. Gastrointestinal toxicity is a recognised side effect of ILF therapy (Douillard et al, 2000; Saltz et al, 2000), which can require hospitalisation and urgent medical intervention (Rothenberg et al, 2001). The incidence of grade 3 or 4 diarrhoea in the current study was comparable to the previous data in gastric cancer (Blanke et al, 2001; Moehler et al, 2003; Pozzo et al, 2004). With close monitoring of the patient, suitable medication and rehydration, most cases of diarrhoea can be managed effectively and do not present a significant obstacle to the clinical use of ILF. The toxicity observed in our study was lower than that reported by Douillard et al in the pivotal European first-line trial where patients with colorectal cancer received weekly irinotecan (80 mg m−2) plus an AIO-based regimen of 24-h high-dose 5-FU (2300 mg m−2) preceded by 2-h LV 500 mg m−2, and grade 3/4 diarrhoea was reported by 44% of patients (Douillard et al, 2000). The lower toxicity in our study might be due to the lower daily doses of 5-FU (2000 mg m−2 administered over 24 h). Work is ongoing to identify those patients who carry a specific genetic polymorphism in one of the main enzymes (UGT1A1) involved in the detoxification of irinotecan and are therefore more susceptible to the side effects of irinotecan (Mathijssen et al, 2001). Such work will improve the targeting of this useful therapy and may allow appropriate prescriptive dosing schedules on an individual basis. The present study concurs with similar phase II studies in that the combination of irinotecan with continuous LV/5-FU (ILF) represents a potentially valuable new treatment option for metastatic gastric cancer and requires further evaluation (Bouche et al, 2004; Pozzo et al, 2004).\n\n**Question:** Compared to Etoposide with 5-fluorouracil plus leucovorin (ELF) what was the result of Irinotecan with high-dose 5-fluorouracil plus leucovorin (ILF) on Overall survival?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** no significant difference\n"
}
] |
no significant difference
|
397
|
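A minimal sketch of how a response-rate comparison like the one discussed in the record above (ILF 43% vs ELF 24% overall response, P=0.0467) can be tested with Fisher's exact test. The per-arm counts below are hypothetical placeholders chosen only to approximate those percentages, not the trial's actual numbers, so the printed p-value will not reproduce the published one.

```python
# Illustrative only: hypothetical responder counts for a 2x2 response-rate test.
from scipy.stats import fisher_exact

ilf_responders, ilf_total = 27, 63   # hypothetical (approx. 43% ORR)
elf_responders, elf_total = 15, 62   # hypothetical (approx. 24% ORR)

table = [
    [ilf_responders, ilf_total - ilf_responders],
    [elf_responders, elf_total - elf_responders],
]
odds_ratio, p_value = fisher_exact(table)
print(f"ILF ORR {ilf_responders / ilf_total:.0%} vs ELF ORR {elf_responders / elf_total:.0%}, "
      f"Fisher exact p = {p_value:.3f}")
```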
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Incisional hernia after upper abdominal surgery: a randomised controlled trial of midline versus transverse incision\n\n ABSTRACT.OBJECTIVES:\nTo determine whether a transverse incision is an alternative to a midline incision in terms of incisional hernia incidence, surgical site infection, postoperative pain, hospital stay and cosmetics in cholecystectomy.\n\nABSTRACT.SUMMARY BACKGROUND DATA:\nIncisional hernias after midline incision are commonly underestimated but probably complicate between 2 and 20% of all abdominal wall closures. The midline incision is the preferred incision for surgery of the upper abdomen despite evidence that alternatives, such as the lateral paramedian and transverse incision, exist and might reduce the rate of incisional hernia. A RCT was preformed in the pre-laparoscopic cholecystectomy era the data of which were never published.\n\nABSTRACT.METHODS:\nOne hundred and fifty female patients were randomly allocated to cholecystectomy through midline or transverse incision. Early complications, the duration to discharge and the in-hospital use of analgesics was noted. Patients returned to the surgical outpatient clinic for evaluation of the cosmetic results of the scar and to evaluate possible complications such as fistula, wound dehiscence and incisional hernia after a minimum of 12 months follow-up.\n\nABSTRACT.RESULTS:\nTwo percent (1/60) of patients that had undergone the procedure through a transverse incision presented with an incisional hernia as opposed to 14% (9/63) of patients from the midline incision group (P = 0.017). Transverse incisions were found to be significantly shorter than midline incisions and associated with more pleasing appearance. More patients having undergone a midline incision, reported pain on day one, two and three postoperatively than patients from the transverse group. The use of analgesics did not differ between the two groups.\n\nABSTRACT.CONCLUSIONS:\nIn light of our results a transverse incision should, if possible, be considered as the preferred incision in acute and elective surgery of the upper abdomen when laparoscopic surgery is not an option.\n\nBODY.INTRODUCTION:\nThe rate of incisional hernia after midline incision is commonly underestimated but probably lies between 2 and 20% [1–5]. Thus, incisional hernia is a major postoperative problem. The treatment of incisional hernia is complicated by high rates of recurrences. Recently, in a randomised controlled trial published by Burger et al. [6], midline incisional hernia repair has been shown to be associated with a 10-year cumulative recurrence rate of 63 and 32% for suture and mesh repair, respectively. The midline incision is the preferred incision for surgery of the upper abdomen, despite evidence that alternatives, such as the lateral paramedian and transverse incision, exist and might reduce the rate of incisional hernia [7]. 
Various approaches to opening the abdomen have been advocated over time. The choice for a certain incision is dependent on the exposure necessary for the desired procedure to succeed. A midline incision, be it supraumbilical, infraumbilical or both, is an approach especially suited for emergency and exploratory surgery because of the quick and generous exposure that can be achieved within a few minutes [8, 9]. The avascular nature of the linea alba minimises blood loss during this procedure. A supraumbilical transverse incision may be utilised in case exposure of the upper abdomen is desired. During this incision, the damage inflicted to the segmental arteries and nerves is previously described as being minimal [10]. Previously, only one randomised controlled trial, comparing transverse and true midline incisions, has been published specifically addressing incisional hernia incidence [11]. To determine whether the use of a transverse incision is an alternative to a midline incision for open cholecystectomy in terms of incisional hernia incidence, surgical site infection, postoperative pain and hospital stay, this randomised controlled trial was performed. This trial was conducted in an era when laparoscopic cholecystectomy was not yet available. The possibility of low incisional hernia rates after transverse incisions and the fact that little is known about potential advantages incited us to publish the relevant results of this randomised controlled trial which has been performed in the past and has only been reported in a Dutch thesis by one of the authors (H.L.). The primary endpoint of this study was the incisional hernia incidence after 12 months of follow-up. Secondary endpoints included pain and cosmetic appearance.\n\nBODY.METHODS.PROTOCOL:\nSome 150 consecutive female patients were randomly assigned to a midline or transverse incision as an approach for elective cholecystectomy or combined cholecystectomy and cholangiography (with or without consecutive choledochotomy) (75 and 75 patients, respectively). Emergency procedures were excluded from participation. The sample size is based on an incisional hernia rate reduction from 20 to 6% at a power of 80% and an error rate of 5%. Obtaining informed consent was conducted in accordance with the ethical standards of the Helsinki Declaration of 1975. The investigation reported was performed with informed consent from all of the patients and followed the guidelines for experimental investigation with human subjects and was approved by the medical ethics committee. An independent statistician prepared closed, tamper-proof envelopes containing the random allocation (Fig. 1). Patients were randomised for one of the procedures in theatre through the opening of the envelopes.Fig. 1Flow chart of patient inclusion and follow-up Patient-related factors that were recorded were age, body mass and length and date of operation. Operation-related factors that were recorded were the exact nature of the operation, length of the incision, the thickness of the subcutaneous fat, surgeon performing the procedure, as well as the duration of the operation (skin-to-skin time). In the immediate postoperative period, the use, dose and type of analgesics was recorded and a pain score was administered. The use of analgesics (morphine 7.5 mg intra-muscular injection, 4 h minimum interval between consecutive injections) was monitored for 48 h after surgery; the pain score was administered for the first 6 days after surgery. 
In patients assigned to surgery through a midline incision, the skin was incised from just below the xyphoid process to just above the umbilicus. The abdominal wall was opened in the midline by incising the linea alba. A Collin type (two-bladed) self-retaining retractor was used to maintain exposure. The abdominal wall was closed in one layer using single polygalactin 910 sutures (Vicryl; Ethicon, Amersfoort, The Netherlands). The skin was consequently closed using running monofilament nylon sutures (Ethilon; Ethicon, Amersfoort, The Netherlands). Patients randomised for a transverse incision received a right-sided unilateral transverse incision between 3 and 4 cm below the costal margin. The rectus muscle was incised. The fibres of the external and internal obliques and the transverse muscles were separated in the direction of their course. Exposure was achieved through the use of a manually held single-bladed retractor. Closure of the abdominal wall was achieved by closure of the peritoneum and the posterior rectus fascia using a continuous, polygalactin 910 suture (Vicryl; Ethicon, Amersfoort, The Netherlands). The anterior rectus sheath and the fascia of the internal and external transverses were closed using simple interrupted polygalactin 910 sutures (Vicryl; Ethicon, Amersfoort, The Netherlands). Towards the end of both procedures, a Redon low-vacuum drain catheter was placed, which was guided outside the abdominal cavity approximately 5 cm from the incision. The skin was consequently closed using continuous monofilament nylon suture (Ethilon; Ethicon, Amersfoort, The Netherlands). All patients received a dose of 5,000 IU of sodium–heparin on the morning of the procedure as thrombosis prophylaxis.\n\nBODY.METHODS.STATISTICAL ANALYSIS:\nThe Pearson χ2 test was used for comparing percentages. In case of small expected numbers, a Fisher's exact test was performed. Continuous variables were analysed using the Mann–Whitney test. A P-value of 0.05 or less (two-sided) was considered to be statistically significant. Means and medians are expressed ±standard deviation (SD).\n\nBODY.METHODS.FOLLOW-UP:\nPatients returned to the surgical outpatient clinic for evaluation of the cosmetic results of the scar and to evaluate possible complications, such as fistula, wound dehiscence and incisional hernia, after a minimum of 12 months follow-up. The patient and the surgeon evaluated the cosmetic results independently and were asked to rate the scar as unsatisfactory, satisfactory or fine. Furthermore, the length and width of the scar was measured.\n\nBODY.RESULTS.STUDY GROUP:\nSome 150 consecutive patients were randomised for participation in this study during an inclusion period from April 1977 until July 1979. Seventy-five patients received a transverse incision and 75 patients a midline incision (Fig. 1). One patient was withdrawn from further follow-up after developing peritonitis and consequent acute respiratory distress syndrome (ARDS) not related to the closure of the abdominal wall 2 days after surgery (transverse incision group). The patients' average age was 51.9 and 51.4 years for the midline and the transverse incision groups, respectively. Furthermore, no differences were found in the body mass and average length between the two groups (Table 1). A cholecystectomy was performed using a transverse incision in 52 patients and utilising a midline incision in 52 patients also. Fifteen and 16 patients, respectively, underwent a combined cholangiography/cholecystectomy. 
A further 7 and 6 patients, respectively, were treated with a cholangiography/cholecystectomy plus additional choledochotomy and the postexploratory placement of a T-tube.Table 1Baseline characteristics of the patients undergoing surgery, according to study groupVariableMidline incisionTransverse incisionn = 75n = 74Average age (years) ± SD51.9 ± 14.851.4 ± 13.8Average weight (kg) ± SD71.3 ± 14.568 ± 14.3Average length (cm) ± SD163.5 ± 7.8164 ± 7.3\n\nBODY.RESULTS.SURGEON:\nStaff surgeons performed 17% (13/75 patients) of all procedures performed through a midline incision. The remainder of the procedures through a midline incision was carried out under staff surgeon supervision. Staff surgeons performed 14% of all procedures in the transverse incisions study group (10/74 patients) and supervised the remainder. No statistically significant difference was found between the two randomised groups (P = 0.65).\n\nBODY.RESULTS.DURATION OF SURGERY:\nNo significant difference was noted in the skin-to-skin time (in min) for the two different incisions (Table 2). Surgery utilising midline and transverse incision took 56.9 ± 29.3 and 53.2 ± 26.8 min, respectively (P = 0.35). The total duration of the procedures until extubation (in min) did not differ between the midline and transverse incisions (71.0 ± 30.5 and 67.0 ± 27.3, respectively, P = 0.34).Table 2Length of incision, thickness of subcutaneous fat and skin-to-skin time, according to study groupVariableMidline incisionTransverse incisionP-valueLength of incision (mm) ± SDa164 ± 28140 ± 24<0.0001Thickness of subcutaneous fat (mm) ± SDa34.5 ± 13.030.3 ± 12.40.05Skin-to-skin time (min) ± SDa56.9 ± 29.353.2 ± 26.80.40Width of scar (mm) ± SDb8.3 ± 1.43.3 ± 1.2<0.0001aMeasured during surgery in 75 midline and 74 transverse incisionsbMeasured at follow-up in 63 and 60 midline and transverse incisions, respectively\n\nBODY.RESULTS.PAIN AND ANALGESICS:\nSignificantly more patients, having undergone a midline incision, reported pain on day one, two and three postoperatively (P < 0.0001, Table 3). In the midline incision group, 28/75 patients required no or only one dose of analgesics; the remainder required two or more doses. Thirty-one patients operated through a transverse incision required no analgesics or only one dose; 43 patients (the remainder) required two or more. No significant difference in the use of analgesics was found between the groups (P = 0.69).Table 3Postoperatively reported pain, according to study group, shown as the number of patients reporting pain at the time points indicated (percentage), with the remainder of patients reporting no painTime point after surgeryMidline incision n = 75Transverse incision n = 74P-valuePatients reporting pain, n (%)Patients reporting pain, n (%)3–4 h68 (91)60 (81)0.09First day64 (85)39 (53)<0.0001Second day57 (76)23 (31)<0.0001Third day28 (37)9 (12)<0.0001Fourth day5 (7)3 (4)0.72Fifth day0 (0)1 (1)0.50Sixth day0 (0)1 (1)0.50\n\nBODY.RESULTS.COMPLICATIONS:\nPostoperative complications (Table 4) were seen in 16 out of 75 patients (21%) from the midline incision group and in 15% from the transverse incision group (11 patients) (P = 0.30). Briefly, one patient in each group developed cardiac complications; 8 and 6 patients developed urinary retention after the midline and transverse incisions, respectively (P = 0.59). 
Surgical site infections were diagnosed in 7 and 3 patients, respectively (P = 0.33).Table 4Rate of complications after surgery, according to study group, shown as the number of patients diagnosed with complications (percentage)ComplicationMidline incisionTransverse incisionP-valuen = 75 n (%)n = 75 n (%)Cardiac1 (1)1 (1)1Urinary retention8 (12)6 (8)0.59ARDS01 (1)0.50Surgical site infection7 (9)3 (4)0.33Haemorrhage1 (1)00.50Pneumonia01 (1)0.50Total17 (23)12 (16)0.30\n\nBODY.RESULTS.DISCHARGE:\nForty-five (60%) and 42 (57%) patients from the patients having undergone a midline or a transverse incision, respectively, were discharged on day 6 or 7 postoperatively. The remaining patients from each group left hospital care on day 8 or later. The duration of hospital admission did not differ between the two types of incision (P = 0.74).\n\nBODY.RESULTS.COSMETICS:\nThe width and length of all incisions was measured during the follow-up visit (Table 2). The mean width of the scar after the healing of the midline incisions was found to be 8.3 ± 1.4 mm. The mean width of the scar after the healing of the transverse incisions was measured to be 3.3 ± 1.2 mm. This observed difference is significant (P < 0.0001). The length of the incisions was 140 ± 24 mm and 164 ± 28 mm for the transverse and the midline incisions, respectively. The difference in scar length was found to be significant (P < 0.0001).\n\nBODY.FOLLOW-UP:\nEighty-one percent of all patients operated through a transverse incision were seen during the follow-up examination (n = 60). Of the patients operated through a midline incision, 63 out of 75 were seen at the outpatient clinic (84%). The patients that were lost to follow-up could either not be traced or had deceased (Fig. 1). The minimum follow-up for the evaluation of cosmetic results and hernia incidence was 12 months and the maximum was 36 months.\n\nBODY.FOLLOW-UP.INCISIONAL HERNIA:\nFrom the patients that had undergone the procedure through a transverse incision, one (1/60; 2%) presented with an incisional hernia as opposed to 9 patients from the midline incision group (9/63; 14%); 95% confidence interval (CI) 7.5–25.4%. This difference in hernia incidence is significant (P = 0.017). No significant correlation was found between the incisional hernia rate and surgical site infection (P = 0.07).\n\nBODY.FOLLOW-UP.SUBJECTIVE COSMETICS:\nPatients and surgeons alike were asked to rate the appearance of the scar during the postoperative follow-up outpatient clinic visit. Both the surgeons and the patients found the scar resulting from the transverse incision to be more cosmetically pleasing (P < 0.0001 and P = 0.03, respectively, Table 5).Table 5Number of patients and surgeons rating the cosmetics of a scar at follow-upScoreMidline incision (n = 63)Transverse incision (n = 60)Patients, n (%)Surgeons, n (%)Patients, n (%)Surgeons, n (%)Unsatisfactory6 (10)25 (40)2 (3)6 (10)Satisfactory16 (25)27 (43)9 (15)12 (20)Fine41 (65)11 (17)49 (82)42 (70)Total63636060Difference between type of incision: patients P = 0.03; surgeons P < 0.0001\n\nBODY.DISCUSSION:\nThis prospective randomised study of transverse and midline incisions for open cholecystectomy shows that a significant reduction of incisional hernia incidence can be achieved through the use of a transverse incision. Only one other study (published in 1980) reported the incidence of incisional hernia after upper abdominal midline and unilateral transverse incision in a randomised trial. 
No difference between the two techniques (8 and 6% incisional hernia, respectively) was found, but the relatively short follow-up of 6 months, however, may be held accountable for this finding [11]. Three retrospective studies showed rates of incisional hernia of 3.2, 5.4 and 16.5% for midline incision and 1.3, 6.7 and 13.4% for transverse incision without statistically significant differences [12–14]. The possible reason for the rather high incidence of incisional hernia in the midline incision group (14%) may lie in the use of resorbable 910 polygalactin sutures. Nevertheless, the use of the same type of resorbable suture in the closure of the transverse incisions resulted in a 2% hernia rate. There is evidence for the importance of proper technique and choice of incision as a means to reduce incisional hernia being more important than the use of suture material [7]. Furthermore, as mentioned above, it is known that the incidence of incisional hernia in the case of a midline incision lies between 2 and 20%. From our data, the NNT (numbers needed to treat) is calculated to be 8 (95% CI 5–30) and the RRR (relative risk reduction) is 88% (95% CI 23–100%). Luijendijk et al. [15] have published a hernia rate of 2% after Pfannenstiel incisions closed using 910 polygalactin, which is in agreement with our findings in the patients randomised for a transverse incision, emphasising the importance of the incision over the choice of suture material. In our study, significantly fewer patients reported pain on day 1, 2 and 3 after transverse incisions, a result that was also described by other authors [16, 17]. Greenall et al. [18] published a contradictory report (in 1980) in which no significant difference in postoperative pain was found between midline and transverse incisions. The previously mentioned study, however, only analysed 46 out of 572 patients (8%) with regard to pain, which may explain the finding. In the same way, Lacy et al. suspended visual analogue pain scoring in a study comparing midline and transverse incisions for abdominal aortic surgery. Remarkably, the two groups in our study did not differ in terms of postoperative analgesia, a finding that is also reported by Lindgren et al. [17] and Donati et al. [19]. In our study, surgeons as well as patients were significantly more satisfied with the aesthetic appearance after a transverse in comparison with a midline incision. The scars after transverse incisions were found to be significantly shorter and less wide than the midline incisions, which may account for the observed difference. A possible reason for this is that a transverse incision is executed parallel to the prevailing direction of the skin lines on the abdomen and, therefore, the tension on the wound and consequent scar is low. Cholecystectomy has come a long way since this trial. The introduction and widespread acceptance of laparoscopic technique as the treatment of choice has rendered open cholecystectomy to be an operation for exceptional, and perhaps surgically difficult, circumstances. Nowadays, the study reported is hardly feasible, yet, the results are still applicable and very relevant for other surgical procedures in the (upper) abdomen. Knowledge of the favourable results of a transverse incision may aid surgeons in their choice when finding themselves in the unfortunate position of needing conversion to open cholecystectomy. 
In conclusion, this investigation on transverse incisions might be helpful in reducing the incidence of incisional hernia in patients after open cholecystectomy. The midline incision is a preferred manner to achieve exposure of the abdominal cavity and is considered to be easily performed and quick. Although the midline incision is generally accepted, the incidence of incisional hernias is surprisingly high [1–5]. The choice for a particular incision should not only be based on exposure, but also on hernia incidence reduction, especially since recurrence rates after hernia repair are reported to be very high. Furthermore, the recurrence rate after incisional hernia repair is a disappointing 63 and 32% for suture and mesh repair, respectively [6]. In the light of these results, incisional hernia prevention is warranted. In this investigation, it is shown that a significant reduction (from 14.5 to 1.7%) of incisional hernia incidence was achieved by using a transverse incision. Hence, a transverse incision should be considered as the preferred incision in acute and elective surgery of the upper abdomen in which laparoscopic surgery is not an option. Full exposure of two quadrants is feasible through the use of a unilateral transverse incision in, for example, biliary, bariatric, liver and colonic surgery. The transverse incision should be part of the abdominal surgeon's armamentarium and is a preferable incision to prevent the high incidence of incisional hernia after abdominal surgery.\n\n**Question:** Compared to Midline incision what was the result of Transverse incision on Procedure performed directly by the staff surgeon?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** no significant difference\n"
}
] |
no significant difference
|
233
|
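The discussion in the record above reports an NNT of 8 (95% CI 5-30) and an RRR of 88% derived from the observed incisional hernia rates (9/63 midline vs 1/60 transverse). A short sketch reproducing that arithmetic from the published event counts; the confidence intervals are not recomputed here.

```python
# Reproduces the NNT and RRR arithmetic from the reported hernia counts.
midline_events, midline_n = 9, 63        # incisional hernias after midline incision
transverse_events, transverse_n = 1, 60  # incisional hernias after transverse incision

risk_midline = midline_events / midline_n            # ~0.143
risk_transverse = transverse_events / transverse_n   # ~0.017

arr = risk_midline - risk_transverse   # absolute risk reduction
nnt = 1 / arr                          # numbers needed to treat, ~8
rrr = arr / risk_midline               # relative risk reduction, ~88%

print(f"ARR = {arr:.3f}, NNT = {nnt:.1f}, RRR = {rrr:.0%}")
```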
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: A prospective, contralateral comparison of photorefractive keratectomy (PRK) versus thin-flap LASIK: assessment of visual function\n\n ABSTRACT.PURPOSE:\nTo compare differences in visual acuity, contrast sensitivity, complications, and higher-order ocular aberrations (HOAs) in eyes with stable myopia undergoing either photo-refractive keratectomy (PRK) or thin-flap laser in situ keratomileusis (LASIK) (intended flap thickness of 90 μm) using the VISX Star S4 CustomVue excimer laser and the IntraLase FS60 femtosecond laser at 1, 3, and 6 months postoperatively.\n\nABSTRACT.METHODS:\nIn this prospective, masked, and randomized pilot study, refractive surgery was performed contralaterally on 52 eyes: 26 with PRK and 26 with thin-flap LASIK. Primary outcome measures were uncorrected distance visual acuity (UDVA), corrected distance visual acuity (CDVA), contrast sensitivity, and complications.\n\nABSTRACT.RESULTS:\nAt 6 months, mean values for UDVA (logMAR) were −0.043 ± 0.668 and −0.061 ± 0.099 in the PRK and thin-flap LASIK groups, respectively (n = 25, P = 0.466). UDVA of 20/20 or better was achieved in 96% of eyes undergoing PRK and 92% of eyes undergoing thin-flap LASIK, whereas 20/15 vision or better was achieved in 73% of eyes undergoing PRK and 72% of eyes undergoing thin-flap LASIK (P > 0.600). Significant differences were not found between treatment groups in contrast sensitivity (P ≥ 0.156) or CDVA (P = 0.800) at postoperative 6 months. Types of complications differed between groups, notably 35% of eyes in the thin-flap LASIK group experiencing complications, including microstriae and 2 flap tears.\n\nABSTRACT.CONCLUSION:\nUnder well-controlled surgical conditions, PRK and thin-flap LASIK refractive surgeries achieve similar results in visual acuity, contrast sensitivity, and induction of HOAs, with differences in experienced complications.\n\nBODY.INTRODUCTION:\nRefractive surgery is one of the most commonly performed elective procedures and will likely maintain its popularity as ablation techniques become more refined and understanding of corneal wound healing improves. Two of the most common methods of refractive surgery are photorefractive keratectomy (PRK) and laser in situ keratomileusis (LASIK). 
The rapid improvement in vision and lack of postoperative pain associated with LASIK has made this the preferred option with patients compared with PRK, which has greater postoperative discomfort and prolonged recovery of visual acuity.1 Recently, there has been renewed interest in PRK because of increasing concerns of complications associated with LASIK flap creation, including dry eye, corneal ectasia, and flap tears.2–5 Thin-flap LASIK attempts to gain benefits of both techniques by creating a flap of between 80 and 90 μm.6–8 Use of a thinner flap results in a more biomechanically stable cornea and decreases incidence of ectasia given the thicker residual stroma.3,9 Cutting a thinner LASIK flap is less invasive to the nerves within the corneal stroma, decreasing the severity and duration of dry eye, possibly by preserving corneal sensation and blinking rate.10–14 Flap creation avoids corneal epithelium removal, allowing reduced healing time and less haze and scarring.15 The present contralateral study compares the outcomes of eyes that have undergone PRK or thin-flap LASIK using the VISX STAR S4 excimer laser (VISX Incorporated, Santa Clara, CA), with flaps created with intended thicknesses of 90 μm using the IntraLase FS60 femtosecond laser (Abbott Medical Optics [AMO], Santa Ana, CA).\n\nBODY.METHODS:\nData from myopic eyes were analyzed, with or without astigmatism, in which the dominant eye was randomized (Research Randomizer software – Urbaniak, www.randomizer.org) to PRK or thin-flap LASIK (90 μm flap) and the nondominant eye underwent the alternative treatment. All PRK and thin-flap LASIK treatments were performed using the VISX Star S4 CustomVue laser at the John A. Moran Eye Center, Salt Lake City, Utah, between February 2008 and July 2009. All surgeries were overseen by two surgeons (M.M., M.D.M.). The research protocol was approved by the University of Utah Hospital Institutional Review Board. All patients included in this study met the US Food and Drug Administration guidelines for VISX CustomVue LASIK. Mean age of patient, 13 men and 13 women, was 30.8 years (range: 23–46). Twenty-six patients (52 eyes) with stable myopia (1.5–8.5 diopters [D]) and astigmatism (0.242–3.11 D) were enrolled in the study. Eleven patients excluded from this study had clinically significant lens opacities, previous corneal or intraocular surgery, keratoconus, unstable refraction, autoimmune disease, immunosuppressive therapy, or were pregnant or breastfeeding. Correction was made for distance and patients desiring monovision correction were excluded. Contact lenses were discontinued 2 weeks prior to screening for soft contact lens wearers and 6 weeks prior to screening for rigid gas permeable lens wearers. All patients had a preoperative examination including assessment of uncorrected distance visual acuity (UDVA), corrected distance visual acuity (CDVA), tonometry, slitlamp examination of the anterior segment, and dilated fundus examination. Manifest and cycloplegic refractions were repeated on 2 separate visits to ensure reliability and stability. Corneal topography and thickness were measured using the Orbscan II v.3.0 (Bausch and Lomb, Rochester, NY). All eyes received 5 preoperative wavefront analyses with the VISX CustomVue WaveScan aberrometer v.3.62 (Fourier) (AMO), without pharmacologic intervention, under mesopic conditions, with a minimum pupil diameter of 6.0 mm. 
The contralateral study design was made so that each eye could act as a control for the fellow eye in each patient, allowing for study groups to be well matched. There were no violations in the randomization; all patients were analyzed as originally assigned. The randomization protocol was generated before the trial and known only to the study coordinator. In all patients, the emmetropic correction target was based on manifest refraction and wavefront analysis. All flaps were created with the IntraLase FS60 femtosecond laser at 60 kHz in a raster pattern with bed energy of 1.15 μJ, side-cut energy of 2.00 μJ, and pocket enabled. The flaps were created with an intended thickness of 90 μm, diameter of 8.4 to 9.0 mm, superior hinge angle of 55°, and a side-cut angle of 70°. Intraoperative pachymetry or optical coherence tomography were not performed to evaluate actual flap thicknesses. If the 8.0 mm maximum intended ablation diameter exceeded the flap diameter, the hinge and flap were shielded during ablation. Postoperatively, each eye undergoing thin-flap LASIK received 1 drop of gatifloxacin 0.3% (Zymar; Allergan Inc, Irvine, CA), prednisolone acetate 1% (Pred Forte, Allergan Inc), ketorolac tromethamine 0.4% (Acular LS, Allergan Inc.), and a bandage soft contact lens (Softlens Plano T, Bausch and Lomb, Rochester, NY). The prednisolone acetate was continued hourly during the first preoperative day and 4 times daily for an additional 6 days. The gatifloxacin was continued 4 times daily for 1 week. In eyes undergoing PRK all eyes had their corneas cooled with 15 mL of BSS (2.8–3.9°C) immediately following ablation. This was followed by 1 drop of a gatifloxacin 0.3% (Zymar), prednisolone acetate 1% (Pred Forte), ketorolac tromethamine 0.4% (Acular LS) and a bandage soft contact lens (Softlens Plano T). Ketorolac tromethamine was continued 4 times a day for 3 days and then discontinued. Gatifloxacin and prednisolone acetate were continued 4 times a day for 1 week with a subsequent steroid taper over 2 to 3 months per surgeon preference. Mitomycin C was not administered to any patient in the study at any time. Both bandage soft contact lenses were removed simultaneously once re-epithelialization was complete, typically on postoperative days 3 to 5. Patients were seen 1 day, 1 week, 1 month ± 10 days, 3 months ±14 days, and 6 months ±14 days. At all follow-up examinations, UDVA and CDVA were tested using a standard Snellen eye chart. Visual acuity was recorded in both Snellen notation and logarithm of the minimum angle of resolution (logMAR) format. Contrast sensitivity was measured in controlled mesopic conditions at 3, 6, 12, and 18 cycles per degree (cpd) using the Vectorvision CSV-1000E chart (Vectorvision, Greenville, OH). Higher-order aberrations (HOAs), including coma Z(3,1), trefoil Z(3,3), and spherical aberration Z(4,0), were measured using the CustomVue WaveScan at a mean diameter of 6 mm. Undilated scans of both eyes were taken preoperatively and 1, 3, and 6 months postoperatively. Primary outcome measures were UDVA, CDVA, contrast sensitivity, and complications. HOAs were measured and trended within groups as secondary measures. After the study was completed, the results were compiled and the data unmasked for statistical analysis. Refractive error, visual acuity, and HOAs were treated as continuous variables and analyzed for significance by independent t-tests. In all tests, P values <0.05 were considered statistically significant. 
Data analysis was done using Microsoft Excel (Microsoft Corp, Redmond, WA).\n\nBODY.RESULTS:\nMean preoperative measurements of UDVA, CDVA, sphere, and cylinder are shown in Table 1. 25 of 26 patients (50 eyes) completed the study at postoperative 6 months. One eye in the thin-flap LASIK group required PRK retreatment following a flap tear and both eyes from this patient were therefore removed from analysis of visual acuity, contrast sensitivity, and HOAs as the retreatment prevented the ability to distinguish results between the 2 surgical methods. The eyes from this patient were still included in the analysis of complications.\n\nBODY.RESULTS.VISUAL ACUITY:\nTable 2 shows visual acuity outcomes at 1, 3, and 6 months postoperatively. Statistically significant differences were found between PRK and thin-flap LASIK in UDVA at 1 month postoperatively, with thin-flap LASIK eyes showing more improvement in UDVA. Visual acuities were not statistically different between the groups at 3 or 6 months.\n\nBODY.RESULTS.STABILITY, EFFICACY, AND PREDICTABILITY:\nTable 3 shows stability, efficacy, and predictability outcomes postoperatively at 1, 3, and 6 months. CDVA was statistically different between groups at 1 month, with 24% of the PRK group losing a line or more from preoperative values, while 9% of eyes in the thin-flap LASIK group lost only 1 line at 1 month. No eyes in the thin-flap LASIK group lost more than 1 line. Also, 39% of eyes in the thin-flap group gained a line by 1 month compared with only 12% of eyes in the PRK group. At 6 months 64% and 56% of eyes had gained a line or more of CDVA in the PRK and thin-flap LASIK groups, respectively (P = 0.462).\n\nBODY.RESULTS.CONTRAST SENSITIVITY:\nContrast sensitivity measurements at 3, 6, 12, and 18 cycles per degree (cpd) in each group are shown in Figure 1. There were no differences between groups at any cpd at any time in the study (P ≥ 0.156). The thin-flap LASIK group showed no change in contrast sensitivity postoperatively (P > 0.131), while patients in the PRK group had a slight decrease in contrast sensitivity at 1 month seen at 3 and 12 cpd (P = 0.004) and (P = 0.025), respectively. At 6 months contrast sensitivity in the PRK group was still significantly decreased from baseline at 3 cpd (P = 0.013), although it did not reach a statistically significant difference at 3 months (P = 0.101).\n\nBODY.RESULTS.COMPLICATIONS:\nTypes of complications differed between the 2 groups. In the PRK group, 2 cases of epithelial defects occurred by 1 week, but had completely resolved by 6 months. Three eyes in the PRK group had mild haze appearing as early as 1 week postoperatively. Haze remained in only 1 eye at 6 months, but was classified as minimal and had no effect on UDVA or CDVA. Nine eyes (35%) in the thin-flap LASIK group experienced complications. In the thin-flap LASIK group, flap debris (1 eye), diffuse lamellar keratitis (DLK, 1 eye), and an epithelial cyst at the edge of 1 flap were observed, with no loss of UDVA or CDVA, and all resolved by 6 months. Microstriae were observed in 6 eyes, one of which was the eye described above with flap debris and the other was the eye with DLK, with no associated loss of UDVA or CDVA, with epithelial proliferation noted as filling the microstria and making them less apparent. 
Two eyes in the thin-flap LASIK group experienced flap tears intraoperatively – one resulting in mild flap edge scarring by 6 months that had no significant effect on visual function, and the other case affecting vision at 1 month postoperatively which was retreated with PRK at 3 months. As a result of the retreatment with the counter surgical technique, the ability to accurately compare visual acuity, contrast sensitivity, and HOAs between the 2 surgical methods was limited and both eyes from this patient were removed from analysis of these measures, but were still included in the analysis of complications.\n\nBODY.RESULTS.HIGHER-ORDER ABERRATIONS:\nAt postoperative 1, 3, and 6 months, 24 (96%), 25 (100%), and 24 (96%) eyes, respectively, in each group completed CustomVue WaveScan analysis. Total root-mean square (RMS) HOAs, coma, trefoil, and spherical aberrations are compared in Figure 2. There were no significant differences between groups in any HOAs throughout the study (P ≥ 0.101), with all P values at 6 months ≥0.63. In both groups, total HOAs (P < 0.008), spherical (P < 0.002), and coma (P = 0.008 at 3 months; P = 0.024 at 6 months) aberrations were significantly increased compared with preoperative conditions. Trefoil showed no significant change throughout the study in either group (P = 0.298).\n\nBODY.DISCUSSION/CONCLUSION:\nThe present study confirms that PRK and thin-flap LASIK are effective surgeries for the correction of low to moderate myopia. Although thin-flap LASIK showed superior visual results in the early postoperative period there was no statistically significant difference in outcomes of UDVA, CDVA, contrast sensitivity, or total RMS HOAs between PRK and thin-flap LASIK by 6 months. In a similar study comparing PRK and thin-flap LASIK, Slade et al also found that UDVA results were better in the thin-flap group early on and equalized by 6 months.16 Our study showed a similar trend, with no significant differences in any of the primary outcomes at 6 months, and with no difference in UDVA at 3 months. Visual regression in our study was similar to outcomes in Slade's study in which 42% of the PRK group lost a line or more of CDVA and 22% of the thin-flap LASIK group lost 1 line at 1 month postoperatively. Despite the use of custom ablation, postoperative increases in total HOAs, sphere, and coma were noted in our study, as also seen by Slade et al, although they noted that the increase in sphere and coma aberrations was significantly higher in the PRK group at 1 and 3 months postoperatively. As found in previous studies, there was no significant change found in trefoil at any time postoperatively.17,18 Our study showed no difference in induction of HOAs between groups at any time. Although increases in HOAs after refractive surgery have been correlated with decreases in contrast sensitivity in other studies, we demonstrate that increases in total RMS, sphere, and coma were seen postoperatively in both groups without a reliable decrease in contrast sensitivity.19,20 Slade's group found that contrast sensitivity was better in the thin-flap group at all postoperative points in the study, which may have been related to their finding of lower induction of sphere and coma aberrations in the thin-flap group compared with the PRK group. The authors recognize that the Slade study had a larger population size (n = 50 per group) and would have increased power to detect significant differences. 
Our study would have had increased power of analysis with a similar study group size, but results from analysis of HOAs would not likely change as P values for all HOAs at 6 months were ≥0.63. It would be difficult to make any such correlation between contrast sensitivity and HOAs from the results of this study. A loss of CDVA has been associated with the development of corneal haze in other studies, but as mentioned above none of the patients with visual regression developed haze.21–23 Findings in other studies showing that the biomechanics of eyes that have received thin-flap LASIK treatment are indistinguishable from those of PRK have led to suggestions that thin-flap LASIK is the best approach to LASIK.16 Although the present study did not find any statistically significant differences between thin-flap LASIK and PRK in terms of visual quality at 6 months, complications dealing with flap integrity in the thin-flap LASIK group were present which are not complications found in PRK. Although PRK remains a viable option for those unable to undergo LASIK, the use of thinner flaps may eliminate some of the complications seen with traditional LASIK. Larger studies are needed to better compare the complication rates of both methods and to determine how effective thin-flap LASIK will be in achieving the benefits of PRK and LASIK while avoiding the risks associated with each method. While thinner LASIK flaps attempt to preserve the biomechanical stability of the corneal stroma, at the same time, the flap itself becomes less stable, as was noted with the 2 flap tears and other complications occurring in the thin-flap LASIK group in this study. A study by Espandar and Meyer24 showed that most complications in flaps created by IntraLase femtosecond laser occurred at the hinge, which is where the 2 flap tears that occurred in this study. A thinner flap hinge would be biomechanically less stable and would increase the likelihood of intraoperative flap tear occurrence as well. Six of the 9 eyes with complications in the thin-flap LASIK group had microstriae, which are caused by the flattening of a weak corneal flap unable to maintain its curvature over the small area of stroma removed during ablation. The biomechanics of the flap and hinge, however, cannot be evaluated by the design of this study as analysis was done based on intended flap thickness, which has been shown to vary with the IntraLase FS60 femtosecond laser.25 The study could have been strengthened had intraoperative pachymetry or OCT been performed. Creating a flap with increased integrity would help prevent microstriae from forming and would also provide for a stronger hinge that would be less susceptible to flap tear. Possible ways to optimize flap integrity include modification of hinge and side-cut angle creation, as well as improved planarity and microdisruption of flap edges. This will allow improved adhesion of the flap to the underlying stroma. Continued improvements in laser technology may allow for safer creation of thinner flaps, helping to provide evidence for superior outcomes in thin-flap LASIK, permitting the biomechanical stability of PRK with the visual recovery of LASIK. 
Custom flap formation that minimizes weak areas susceptible to tearing will be helpful in achieving this difficult balance between corneal and flap integrity.\n\n**Question:** Compared to Thin-flap laser in situ keratomileusis (LASIK) what was the result of Photo-refractive keratectomy (PRK) on Corrected distance visual acuity at 6 months?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** no significant difference\n"
}
] |
no significant difference
|
254
|
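The record above states that visual acuity and higher-order aberrations were treated as continuous variables and compared between PRK and thin-flap LASIK eyes with independent t-tests on logMAR values. A minimal sketch of that comparison; the arrays below are simulated placeholders (means loosely based on the reported 6-month UDVA), not the study data.

```python
# Illustrative only: simulated 6-month logMAR UDVA values for two treatment groups.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
prk_udva = rng.normal(loc=-0.043, scale=0.10, size=25)    # hypothetical PRK eyes
lasik_udva = rng.normal(loc=-0.061, scale=0.10, size=25)  # hypothetical thin-flap LASIK eyes

t_stat, p_value = ttest_ind(prk_udva, lasik_udva)
print(f"independent t-test: t = {t_stat:.2f}, p = {p_value:.3f}")
```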
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Impact and Process Evaluation of Integrated Community and Clinic-Based HIV-1 Control: A Cluster-Randomised Trial in Eastern Zimbabwe\n\n ABSTRACT.BACKGROUND:\nHIV-1 control in sub-Saharan Africa requires cost-effective and sustainable programmes that promote behaviour change and reduce cofactor sexually transmitted infections (STIs) at the population and individual levels.\n\nABSTRACT.METHODS AND FINDINGS:\nWe measured the feasibility of community-based peer education, free condom distribution, income-generating projects, and clinic-based STI treatment and counselling services and evaluated their impact on the incidence of HIV-1 measured over a 3-y period in a cluster-randomised controlled trial in eastern Zimbabwe. Analysis of primary outcomes was on an intention-to-treat basis. The income-generating projects proved impossible to implement in the prevailing economic climate. Despite greater programme activity and knowledge in the intervention communities, the incidence rate ratio of HIV-1 was 1.27 (95% confidence interval [CI] 0.92–1.75) compared to the control communities. No evidence was found for reduced incidence of self-reported STI symptoms or high-risk sexual behaviour in the intervention communities. Males who attended programme meetings had lower HIV-1 incidence (incidence rate ratio 0.48, 95% CI 0.24–0.98), and fewer men who attended programme meetings reported unprotected sex with casual partners (odds ratio 0.45, 95% CI 0.28–0.75). More male STI patients in the intervention communities reported cessation of symptoms (odds ratio 2.49, 95% CI 1.21–5.12).\n\nABSTRACT.CONCLUSIONS:\nIntegrated peer education, condom distribution, and syndromic STI management did not reduce population-level HIV-1 incidence in a declining epidemic, despite reducing HIV-1 incidence in the immediate male target group. Our results highlight the need to assess the community-level impact of interventions that are effective amongst targeted population sub-groups.\n\nBODY.INTRODUCTION:\nHIV-1–prevalence declines may now be occurring in some sub-Saharan African countries [1]. However, there remains little direct evidence that prevention measures—rather than natural HIV-1 epidemic dynamics [2] or behaviour change prompted by mortality [3]—have contributed to the slowing of HIV-1 epidemics [4,5]. Syndromic management of sexually transmitted infections (STIs) proved effective early in an HIV-1 epidemic in north-west Tanzania [6]. Peer education to promote safe behaviours showed promise in early process evaluations [7], but a randomised controlled trial (RCT) of factory workers in Harare, Zimbabwe, done in the mid-1990s, proved inconclusive [8]. Subsequent RCTs of syndromic management [9] and mass treatment of STIs [10], together with an information, education, and communication (IEC) behaviour-change programme [9], showed no effect in more mature epidemics. 
Integrated implementation of synergistic community-based HIV-1 control strategies could be a more cost-effective and sustainable approach to HIV-1 prevention than parallel application of vertical (top-down) programmes [11]. One scientific evaluation of such a strategy has been reported in which a combination of IEC activities amongst the general population and syndromic STI management showed no impact on HIV-1 incidence at the population level [9], although participation in the IEC activities was associated with reduced HIV-1 infection in women [12]. We conducted a cluster-RCT to test the hypothesis that integrated implementation of combined community- and clinic-based HIV-1 prevention, in which IEC activities focus primarily on high-risk populations, can be feasible and effective in reducing HIV-1 incidence in a major maturing epidemic in eastern Zimbabwe (Protocols S1 and S2; Text S1 and S2).\n\nBODY.METHODS.PARTICIPANTS AND RANDOMISATION PROCEDURE:\nThe study communities comprised six pairs of communities matched by socio-economic type—small town, tea/coffee estate, forestry plantation, roadside trading settlement, and subsistence farming area (two pairs) (Figure 1). Each community included at least one Government or Mission health centre. It was anticipated that HIV-1 incidence would be similar within each pair of communities. Within each pair, one community was assigned at random (un-blinded coin toss by a Ministry of Health official witnessed by programme and research personnel) to receive the additional intervention and the other to be the control. These procedures were designed to ensure that Mission, non-governmental organisation, and private sector programmes (for details, please refer to the following section) would be distributed evenly between intervention and control sites. Figure 1Location of Intervention and Control Communities in Manicaland Province, Eastern Zimbabwe We assessed the effect of the intervention using results from laboratory tests for HIV-1 infection and questionnaire data collected in the baseline and 3-y follow-up rounds of a population-based, closed-cohort survey. The 12 study communities were enumerated in a phased manner, with paired communities being enumerated consecutively to minimise the effects of any seasonal factors. HIV-1–prevention activities were commenced in each intervention community shortly after completion of the baseline survey in that community. In each community, individuals eligible for the study were identified in the first round using data from household listings prepared in an initial census. All males and females aged 17–54 y and 15–44 y at last birthday (the age groups expected to have the highest incidence of HIV infection), respectively, who had slept in a household in the community for at least four nights in the previous month, and who had also done so at the same time 1 y earlier, were considered eligible for the study. In heterosexually driven HIV-1 epidemics, risk of infection can be correlated amongst marital partners [13]. 
Therefore, to maximise statistical power to detect differences in HIV-1 incidence, enrolment was restricted to one randomly selected member per marital group.\n\nBODY.METHODS.INTERVENTIONS:\nIntervention and control communities were to receive standard Government services including basic syndromic STI management, condom distribution from health clinics and Zimbabwe National Family Planning Council outlets, home-based care, and limited HIV/AIDS–focussed IEC activities (e.g., occasional AIDS-awareness meetings and distribution of posters and leaflets). In addition, social marketing of male and female condoms would be provided through an ongoing national programme [14]. The intervention comprised targeted and population-level strategies to promote safer sexual behaviour and to improve treatment of STIs that facilitate HIV-1 transmission. The intervention strategies were implemented by two local non-governmental organisations (Family AIDS Caring Trust and the Biomedical Research and Training Institute) and the Zimbabwe Ministry of Health and Child Welfare through an integrated programme of community- and clinic-based activities. Integration of the individual programme components was achieved through the joint involvement of the participating agencies in the planning and implementation of activities and through the inclusion of biomedical and behavioural aspects within each component. The programme design comprised three key components: (1) peer education and condom distribution amongst commercial sex workers and male clients at workplaces and in the general community, supported by income-generating projects; (2) strengthened syndromic management of STI services at local health centres; and (3) open days with HIV/AIDS IEC activities at health centres to promote safer sexual behaviour and to increase the uptake of local STI treatment services. The peer-education component was based on a model which had been developed by the Project Support Group at the University of Zimbabwe [7] and which had been widely implemented within Zimbabwe and neighbouring countries. Activities were held weekly at workplaces and at locations within the general community (e.g., beer halls and markets) where casual relationships were most frequently formed [15]. The target population comprised sex workers and male clients who form a bridge population in HIV transmission [16] between sex workers and the monogamous (or serial monogamous) majority of women [17,18]. It was posited that the high HIV-1 incidence observed amongst young women could be reduced by altering the behaviour of their older male partners whose own behaviour was intrinsically more risky [19]. The behavioural component would be reinforced in counselling sessions with STI patients and through micro-credit income-generating projects to reduce unmarried women's dependence on commercial sex work. The micro-credit scheme consisted of small interest-free loans repayable over 10 mo, provided to groups and to individuals together with training in small-business management. The targeted activities would be extended to the general population through open days held at local health centres. Besides providing basic HIV/AIDS information, it was envisaged that programme meetings and activities, by their continuous nature, would sustain high levels of awareness of the risks of HIV transmission and would facilitate renegotiation of community social norms, making safer behaviours easier to adopt. 
The key messages of the programme were: (1) remain faithful to one regular sexual partner; (2) use condoms consistently with any casual sexual partners; and (3) seek prompt and effective treatment for any STIs. Syndromic management of STIs at primary healthcare centres was first introduced in Zimbabwe in the 1980s [20] and formed the basis of STI diagnosis and treatment services at baseline in the intervention and control communities. It was envisaged that these services could be strengthened and made more effective through a programme of regular classroom training and on-site supervision of nursing staff, through the introduction of training in systemic counselling for STI patients, and through the provision of small quantities of treatment drugs to cover delays in routine supplies. Quality-assurance procedures applied in the intervention communities included pre- and post-training tests for peer educators and, for nursing staff, attending the syndromic STI management and systemic counselling courses, regular on-site supervision (including random spot checks) and training, refresher courses, routine planning meetings and monitoring of service statistics, and quarterly workshops where detailed programme procedures were reviewed and updated. An interim qualitative process evaluation of intervention activities was conducted during the inter-survey period, and a report on the findings was provided to the implementing organisations.\n\nBODY.METHODS.OUTCOME AND PROCESS MEASURES:\nThe primary outcome of the study was HIV-1 incidence at the community level amongst individuals who were uninfected at baseline. Blood was collected onto Whatman No. 3 filter paper and transported to the Biomedical Research and Training Institute laboratory in Harare. Blood spots were air dried at 4 °C and, for long-term (>1 mo) storage, were kept at −20 °C. For baseline studies, blood was eluted into phosphate-buffered saline, and antibodies to HIV were detected using a dipstick dot EIA (ICL-HIV-1/HIV-2 Dipstick, [PATH, http://www.path.org; produced locally in Thailand]) and a standard protocol [21,22]. All positive results and a 10% sample of negative results were confirmed using a plate EIA (Abbott Third-Generation HIV-1/HIV-2 EIA [http://www.abbott.com] or Genelavia MIXT HIV-1/HIV-2 [Sanofi Diagnostics Pasteur, Marnes La Coquette, France]). At follow-up, a similar protocol was followed. Only the samples from those participants recorded as being HIV seronegative at baseline were tested at follow-up, again using a dot EIA (ICL-HIV-1/HIV-2 Dipstick, [PATH, produced locally in India]). Where seroconversion was indicated, the frozen stored baseline sample was retested to confirm the original negative result using the same dot EIA test. Where the baseline result remained negative, the Abbott EIA test was used to confirm both baseline and follow-up results. The change in place of manufacture of the dot EIA and the exclusive use of Abbott test kits to confirm positive sera at follow-up was due only to changes in the supply of test reagents, and not to perceived changes in sensitivity or specificity [23]. Apart from the principal investigators (based in Harare, London and Oxford) and those nurses given permission by participants requesting voluntary counselling and testing (VCT), all research personnel remained blind to the HIV-1 status of individual participants. 
Secondary outcomes, measured at the community and individual level, were self-reported genital ulcers and urethral or vaginal discharge in the past year (STI cases), STI treatment effectiveness (self-reported cessation of symptoms), indicators of sexual and health-seeking behaviour change, and HIV/AIDS knowledge. The behaviour-change variables assessed were sexual debut, sexual partner change in the past year, non-regular partnerships in the past month, and unprotected sex with regular and casual partners in the past 3 y. The data on sexual partnerships and condom use were collected using the Informal Confidential Voting Interview method for 75% of respondents selected at random in the first round of the survey. This method includes procedures to build rapport, ensure a non-judgemental interview approach, and provide reassurance that there are no right or wrong answers to questions of a personal nature, and uses a simple secret voting-box system to reduce embarrassment and guarantee confidentiality in low-development settings [18]. Its use has been shown to be associated with greater disclosure of socially proscribed behaviour in the study population [24]. Process indicators examined comprised changes in knowledge and psychosocial status and indicators of programme coverage and quality.\n\nBODY.METHODS.SAMPLE-SIZE CALCULATIONS:\nInitial sample-size calculations assumed 20% HIV-1 prevalence at baseline, 30% loss to follow-up after 2 y, and 80% power to detect a 40% reduction in HIV-1 incidence in the intervention communities compared with control communities, assuming a background yearly incidence of 2%. Based on six pairs of communities and a co-efficient of variation between communities of 0.15, the required sample size in each community was 1,000. Funding constraints and slower than anticipated implementation of intervention activities led to revisions of the sample size for each community to 800 and the length of follow-up to 3 y, respectively. Assuming a proportionate increase in loss to follow-up to 41%, these arrangements also yielded 80% power to detect a 40% reduction in HIV-1 incidence.\n\nBODY.METHODS.STATISTICAL METHODS:\nTo test the randomisation with small numbers of communities, HIV-1 prevalence, STI history, and socio-demographic characteristics were compared at baseline for study participants in the intervention and control communities, together with uptake of STI treatment and VCT services offered at baseline. Outcome and process indicators were compared for intervention versus control communities. Analysis of the primary outcome was on an intention-to-treat basis. Incident events and person-years at risk of seroconversion were used to calculate HIV-1 incidence rates and unadjusted and adjusted incidence rate ratios (IRR) with 95% confidence intervals (CIs) for each pair of communities. Adjustment was made for sex, 3-y age group, and community-level baseline HIV prevalence. The overall IRRs (unadjusted and adjusted) were taken to be the geometric means of the IRRs for the six pairs of communities. We calculated 95% CIs for each geometric mean as geometric mean ± 1.96 × standard error of the geometric mean. Paired student t-tests on the logarithms of the pair-specific IRRs were used to test whether these differed significantly from unity [25]. The coefficient of variation between communities was calculated based on baseline HIV prevalence using a standard procedure for pair-matched studies [26]. 
Analyses of prevalence for secondary outcome and process variables were conducted separately for male and female respondents seen at both survey rounds by fitting logistic regression models to the individual-level data and adjusting for community pair and, where available, value of variable at baseline. Since most programme activities were targeted and overall coverage of programme activities was therefore limited, sub-group analyses, adjusted for community pair, were done for HIV-1 incidence and behavioural outcomes to assess the individual-level effects of attendance at programme meetings. Data were entered and validated using SPSS-PC (http://calcnet.mth.cmich.edu/org/spss/index.htm) and data analysis was conducted in Stata version 7 (http://www.stata.com). Statistical tests were double-sided and results were taken to be significant at the 5% level.\n\nBODY.METHODS.ETHICAL APPROVAL:\nAll study participants in the intervention and control communities were offered free VCT for HIV-1, an information sheet on HIV/AIDS, results from a diagnostic test for Trichomonas vaginalis [27] (done at baseline only), and free treatment for T. vaginalis and other STIs from a research nurse. Testing and treatment for T. vaginalis was provided because the prevalence of other curable STIs was low in the study areas [22]. Antibodies reactive with T. vaginalis were detected in DBS eluates following a previously described procedure [27,28]. Written informed consent was sought as a condition of enrolment and continuation in the study. Prior ethical approval was obtained from the Research Council of Zimbabwe, number 02187; the Applied and Qualitative Research Ethics Committee in Oxford, United Kingdom, N97.039; and the UNAIDS Research Ethics Committee, ERC 98/03.\n\nBODY.RESULTS.PARTICIPANT FLOW:\nIn round 1 of the census (July 1998 to February 2000), 5,943 and 6,037 eligible individuals in the intervention (total population size 18,104) and control (18,633) communities, respectively, were selected for recruitment into the study cohort (Figure 2). In round 2, 3 y later (July 2001 to February 2003), 1,044 (23%) and 1,144 (26%) of baseline respondents who were still alive had migrated away from the intervention and control communities, respectively, and were therefore lost to follow-up (Figure 2). At both baseline and follow-up, migrants and non-migrants had similar risks of HIV-1 infection and associated behaviour [29]. Of those still resident in the intervention and control communities, 2,664 (75%) and 2,564 (77%), respectively, were interviewed and blood samples taken for a second time. Temporary absence from the usual place of residence was the main reason for non-participation in the intervention (n = 794, 95%) and control (n = 698, 94%) communities. The overall proportions of baseline respondents followed up at the end of the study were 55% and 56% in the intervention and control communities, respectively. The median follow-up of communities was 3.0 y (range of median within communities, 3.0–3.1). Figure 2Flow-Chart Comparing Participation and Follow-Up Rates in the Intervention and Control CommunitiesIndividuals enrolled in round 1 and still resident in the study communities were considered eligible for participation in round 2.\n\nBODY.RESULTS.BASELINE DATA:\nHIV-1 prevalence was higher in the intervention communities than in the control communities (24% versus 21%, risk ratio 1.13 [95% CI 1.05–1.22], p = 0.001). T. 
vaginalis infection, secondary school education, and spatial mobility were more common in the control communities, whilst history of genital discharge and uptake of STI treatment and VCT services offered in the survey were low overall but more frequent in the intervention communities (Table 1). However, the differences in each case were small and were unlikely to be clinically meaningful. Table 1 Baseline Characteristics of the Study Populations \n\nBODY.RESULTS.OUTCOMES AND ESTIMATION:\nMedian follow-up per person was 2.9 y (range 1.4–3.9) and 3.0 y (range 1.5–4.1) in the intervention and control communities, respectively. In total, 4,052 individuals had 212 incident events of HIV-1 during 12,009 person-years at risk, giving an HIV-1 incidence rate of 1.77 per 100 person-years at risk. HIV-1 incidence was higher in communities with higher baseline HIV prevalence (IRR 11.49 [95% CI 1.80–73.40], p = 0.010), but this difference disappeared after adjustment for stratification by community type (p = 0.8). HIV-1 incidence was higher in the intervention communities than in the control communities overall, and in each community type, except in the forestry plantations where it was almost identical (Table 2). The difference was not significant after adjustment for sex, age group, and baseline HIV prevalence (IRR 1.27 [95% CI 0.92–1.75], p = 0.012). The observed coefficient of between-community variation was 0.14. Table 2 HIV Prevalence at Baseline and HIV Incidence and IRRs for Intervention Versus Control Communities Looking at outcome indicators for community members (rather than for communities—the unit of randomisation), self-reported STI symptoms were similar in both sets of communities (Table 3). Treatment for STI symptoms in males was effective more frequently in the intervention communities, with men in the intervention community in five of the six matched pairs reporting reduced symptom recurrence. However, more young women in the intervention than in the control communities had started sex, and reports of unprotected sex with a casual partner in the study period were more common in the intervention communities. No differences were observed in consistent condom use with regular partners between the two sets of communities. In the intervention communities, knowledge about HIV/AIDS was enhanced amongst men, and more respondents reported a close relative or family member with AIDS (sex- and age-adjusted prevalence odds ratio 1.22 [95% CI 1.05–1.42], p = 0.009). Slightly more women in the intervention communities reported that condom use within marriage was becoming acceptable, but a greater proportion of men agreed with the statement that \"condoms reduce the pleasure of sex\". Table 3 Biomedical, Sexual Behaviour, and Psychological Outcomes at Follow-up by Residence in the Intervention and Control Communities A total of 63,261 peer-education meetings were held, and 6.8 million condoms were distributed by the programme in the intervention communities (Table 4). Outputs increased over time as new communities entered the programme. However, owing to high inflation and economic decline, the micro-credit income-generating projects proved impossible to implement. We were able to obtain data on STI episodes treated at clinics in the 11 out of 12 study communities that reported cases to the administrative districts of Mutasa and Makoni. In the three intervention communities each in Mutasa and Makoni, STI cases fell by 66% and 51%, respectively, over the 3-y study period. 
Similar declines of 67% and 52% occurred at clinics in the four control communities in Mutasa and the one control community in Makoni. Coverage of training in syndromic STI management and systemic counselling for nursing staff was high (Table 4). Table 4 Summary of Service Statistics on Programme Output Most of the activities were targeted at high-risk groups. In the general population sample interviewed in the follow-up survey, 1,779 (35%) and 647 (13%) of 5,098 respondents reported attending an HIV/AIDS meeting and a programme meeting, respectively (Table 5). More respondents in the intervention communities than in the control communities attended an HIV/AIDS meeting (41% versus 28%, prevalence rate ratio 1.44 [95% CI 1.33–1.56], p < 0.001) and a programme meeting (20% versus 5%, 4.27 [95% CI 3.52–5.17], p < 0.001), and participation was higher among men than women (prevalence rate ratio 1.32 [95% CI 1.14–1.53], p = 0.002). Fewer women in the intervention communities had heard about HIV/AIDS from external sources or believed that STI drugs were available at their local clinics. Sixty-two (2%) out of 2,528 respondents in the control communities reported spending at least 1 d in the past month in the intervention communities; the equivalent number for respondents in the intervention communities visiting control communities was 70 (3%) out of 2,683. Table 5 Intervention Coverage \n\nBODY.RESULTS.ANCILLARY ANALYSES:\nIn exploratory analysis to assess where the intervention failed, we found that HIV-1 incidence was reduced in males (IRR 0.48 [95% CI 0.24–0.98], p = 0.044) who reported attending programme meetings, after adjustment for the targeting of activities to groups with high-risk behaviour (Table 6). Amongst men who reported one or more casual sexual partners in the past 3 y, fewer of those who attended meetings reported unprotected sex with these partners (prevalence odds ratio 0.45 [95% CI 0.27–0.75], p = 0.002). HIV-1 incidence was not associated with programme participation in women. Table 6 HIV Incidence and IRRs by Meeting Attendance and Sex \n\nBODY.DISCUSSION.INTERPRETATION:\nWe conducted a scientific trial of the feasibility and impact of an integrated community- and clinic-based HIV-1–prevention intervention. The income-generating projects apart, the intervention activities were feasible. The outputs of the programme were extensive with more than 63,000 meetings being conducted and almost 7 million condoms distributed by trained peer educators. Programme messages were considered relevant and realistic. Local STI treatment and counselling services were strengthened and promoted in accordance with the intervention protocol. For male participants, these activities improved HIV/AIDS knowledge, increased the effectiveness of STI treatment, increased consistent condom use with casual partners, and reduced HIV-1 incidence. However, the cluster-RCT results clearly show that the intervention had no positive impact at the community level and suggest possible detrimental effects on the onset of female sexual activity and condom use with casual partners over a 3-y timeframe. Did the cluster-RCT design fail to capture the true effect of the intervention? There are three possibilities: (1) inadequate statistical power; (2) insufficient follow-up; and (3) contamination of intervention within control communities. 
The study design provided adequate statistical power to detect a meaningful average reduction (40%) in HIV-1 incidence in the intervention versus the control communities over a 3-y observation period. In hindsight, an effect size of 40% was too optimistic and the study had insufficient power to detect a smaller effect. However, there was no trend in the results towards reduced HIV-1 incidence in the intervention communities. Largely due to migration, attrition was close to that anticipated in the study design and was comparable to other recent cohort studies [6,10,9,30]. Migrants had similar characteristics and sexual behaviour to non-migrants [29]. The results of the exploratory sub-group analysis generate the hypothesis that high-risk behaviour was reduced in males attending programme meetings but did not translate into a wider impact on HIV-1 incidence at the population level. Changes in core and bridge populations may take more time to reflect in the general population than was observed in the trial. However, a longer period of follow-up would have increased attrition, and the finding of a possible adverse effect at the population level meant that it would not have been ethical to continue with the same intervention. Future trials of behaviour-change interventions may need to include multiple rounds with phased recruitment and (where interim results are favourable) may need to consider phased intervention implementation. We minimised intervention contamination by selecting physically separated study communities, and movements between intervention and control communities were rare. However, a similar peer-education programme was implemented in one control community (small town), and HIV-1–prevention activity was considerable in all control communities that also had greater access to information from external sources. In some cases, programme messages (e.g., promotion of condom use) conflicted with those of other agencies working in the intervention communities. The effects of these other programmes could have limited our ability to detect a reduction in HIV-1 incidence caused by the current intervention. The absence of an observed effect of the intervention was not explained by differences in HIV-1 prevalence, sexual behaviour, STI cofactors, mobility, or socio-demographic composition at baseline. The earlier sexual debut in females and greater unprotected sex with casual partners seen in the intervention communities during the study period were not present at baseline but could reflect increased willingness to report high-risk behaviours in settings where there was more open discourse about HIV and AIDS. The peer-education programme could have had some effect for male but not for unmarried female participants. Preliminary findings from subsequent qualitative investigations indicate that, in the predominantly rural communities in which the study was conducted, poverty and the associated failure of income-generating projects meant that some peer educators were unable to maintain safer behaviours. Given their increased visibility within the community—intended to enhance their status and self-esteem and, thus, to reinforce their commitment to and role as models for behaviour change—they may, inadvertently, have served as negative role models and, thereby, may have contributed to the greater female early-age sexual activity. 
Free distribution of condoms by women still engaging in unprotected commercial sex led to their being poorly valued and reinforced their association with promiscuity.\n\nBODY.DISCUSSION.GENERALISABILITY OF FINDINGS:\nEpidemiological context can affect the impact of interventions [31], and structural obstacles can limit the pace and extent to which activities are implemented and the quality of these activities [32]. The HIV-1 epidemic stabilised in eastern Zimbabwe during the study period, with HIV-1 prevalence declining by 40%–50% in young adults [23]. This decline was accompanied by delayed sexual debut, reduced sexual partner change, and consistent condom use with casual partners [33,23]. Prevalence of syphilis, gonorrhoea, and Chlamydia is low, but non-curable herpes simplex virus type 2 remains common [22]. Risk reduction makes transmission more fragile, and an intervention could have a larger effect when set against secular behavioural changes [2]. Mathematical model simulations suggest that there would also be a greater chance of detecting a significant effect of the intervention even though there would be fewer seroconversions to power the calculation [34,35]. Structural obstacles to intervention implementation included HIV/AIDS mortality which disrupted the programme by claiming the lives of two programme coordinators and several of the nursing staff and peer-educators. Economic decline made the income-generating projects unfeasible and reduced the effectiveness of other components of the intervention. We believe that the coverage of the peer-education programme was satisfactory, given the focus on highly sexually active individuals. Meeting coverage could have been under-estimated in the survey since one-to-one discussions and activities at beer halls and other public places may not have been recognised as meetings by those present. However, the high level of spatial mobility limited the number of people who were reached at the required level of intensity and consistency, whilst national shortages of foreign currency restricted fuel and drug supplies, hampered attempts to extend community activities into the more remote rural areas, and disrupted the STI treatment programme in both the intervention and control communities. The intervention that we evaluated could have greater effect where an HIV-1 epidemic is younger, HIV-1 incidence is greater, local sexual networks are less diffuse, background STI control is weak, herpes simplex virus type 2 is less common, population mobility is lower, and/or the socio-economic climate is stable. We cannot rule out an effect of peer education in the urban intervention community since similar activities were implemented in the control community. Targeted peer education may work better in towns where bar-based sex work is more extensive. The absence of reduced HIV-1 incidence in farming estates reinforces doubts raised by the Harare factory workers study [8] concerning the efficacy of workplace peer education.\n\nBODY.DISCUSSION.OVERALL EVIDENCE:\nThese findings are important since the strategies evaluated—i.e., peer education, condom distribution, and syndromic STI management—are theory-based, have the potential for independent effects [11], and are widely promoted [36,37]. Syndromic STI management was effective in a nascent epidemic [6]. However, our disappointing findings echo those from recent trials [9,12] and emphasise the need for alternative strategies of behaviour-change promotion. 
Social marketing of condoms [14], larger poverty-alleviation programmes to reduce women's reliance on sex work—based on skills training and careful market research rather than on small-scale income-generating projects—and strategies which reach beyond high-activity core groups, such as the Popular Opinion Leader programme [38,39], and client-centred counselling [40], could be more viable and effective in reducing HIV-1 transmission in rural areas. Given the necessary economic conditions, unmarried women may still play a useful role in bar-based programmes since beer halls remain foci for high-risk behaviour [41,15].\n\nBODY.SUPPORTING INFORMATION.TRIAL REGISTRATION:\nThis trial has the registration number ISRNCT00390949 in the International Standard Randomized Controlled Trial Number Register. Found at: http://www.clinicaltrials.gov/ct/show/NCT00390949?order=1\n Protocol S1Protocol(35 KB DOC)Click here for additional data file. Protocol S2Revisions to Protocol(35 KB DOC)Click here for additional data file. Text S1CONSORT Checklist(48 KB DOC)Click here for additional data file. Text S2Ethical Approval, Information Letter, and Consent Forms(2.8 MB PDF)Click here for additional data file.\n\n**Question:** Compared to Baseline communities that received standard Government services what was the result of Baseline communities that received additional education and preventive measures on Patients infected with T. vaginalis?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** no significant difference\n"
}
] |
no significant difference
|
282
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Randomised phase II evaluation of irinotecan plus high-dose 5-fluorouracil and leucovorin (ILF) \n\n ABSTRACT:\nAn open-label randomised comparison of efficacy and tolerability of irinotecan plus high-dose 5-fluorouracil (5-FU) and leucovorin (LV) (ILF) with etoposide plus 5-FU/LV (ELF) in patients with untreated metastatic or locally advanced gastric cancer. One cycle of ILF comprised six once-weekly infusions of irinotecan 80 mg m−2, LV 500 mg m−2, 24-h 5-FU 2000 mg m−2, and ELF comprised three once-daily doses of etoposide 120 mg m−2, LV 300 mg m−2, 5-FU 500 mg m−2. In all, 56 patients received ILF and 58 ELF. Median age was 62 years, Karnofsky performance 90%, and disease status was comparable for both arms. The objective clinical response rates after 14 weeks treatment (primary end point) were 30% for ILF and 17% for ELF (risk ratio (RR) 0.57, 95% confidence interval (CI) 0.29–1.13, P=0.0766). Overall response rates over the entire treatment period for ILF and ELF were 43 and 24%, respectively (RR 0.56, 95% CI 0.33–0.97; P=0.0467). For ILF and ELF, respectively, median progression-free survival was 4.5 vs 2.3 months, time to treatment failure was 3.6 vs 2.2 months (P=0.4542), and overall survival was 10.8 vs 8.3 months (P=0.2818). Both regimens were well tolerated, the main grade 3/4 toxicities being diarrhoea (18%, ILF) and neutropenia (57%, ELF). The data from this randomised phase II study indicate that ILF provides a better response rate than ELF, and that ILF should be investigated further for the treatment of metastatic gastric cancer.\n\nBODY:\nGastric cancer is the fourth most common cancer in Europe and the third leading cause of cancer mortality (Bray et al, 2002). Although gastric cancer has declined over the past 50 years, the incidence of tumours at the gastro-oesophageal junction has increased (Blot et al, 1991). The use of chemotherapy for the management of patients with advanced gastric cancer, who have limited treatment options (Hohenberger and Gretschel, 2003), has only become widely acceptable over the last 20 years (Glimelius et al, 1997; Murad et al, 1993; Pyrhonen et al, 1995). 5-Fluorouracil (5-FU), usually in combination with leucovorin (LV, also referred to as folinic acid), forms the basis of most chemotherapy regimens used for the treatment of gastric cancer. A randomised phase III trial compared combinations of 5-FU with other active drugs in advanced gastric cancer: etoposide, LV and 5-FU (ELF) vs infusional 5-FU plus cisplatin (FUP) vs 5-FU, doxorubicin and methotrexate (FAMTX) (Vanhoefer et al, 2000). The overall response rates (ORRs) ranged from 9% (ELF) to 20% (FUP) and median survival times were between 6.7 months (FAMTX) and 7.2 months (both ELF and FUP). The observed differences were not statistically significant and there is still no definitive regimen for the treatment of gastric cancer. 
The combination of epirubicin, cisplatin and continuous infusion 5-FU (ECF) has been proposed as a standard first-line therapy for gastric cancer as a consequence of its significantly improved response rate (46%) and survival (8.7 months) when compared with FAMTX (Waters et al, 1999). More recently, ELF has been shown to provide better disease control (complete response (CR)+partial response (PR)+stable disease (SD)) for patients with proximal rather than distal tumours (85 vs 48%, P=0.04) (Schulze-Bergkamen et al, 2002). Tolerability, toxicity and ease of administration have become major determinants for selecting an appropriate therapy and ELF has emerged as a convenient, well-tolerated regimen that can be administered on an outpatient basis (Vanhoefer et al, 2000; Schulze-Bergkamen et al, 2002). Irinotecan (CPT-11, Camptosar; Pfizer Oncology, New York, USA) inhibits topoisomerase I thereby disrupting DNA replication and cell division within tumour cells. Response rates between 20 and 23% have been reported for irinotecan monotherapy in untreated gastric cancer (Futatsuki et al, 1994; Kohne et al, 2003). In patients who had failed previous therapy, irinotecan (180 mg m−2) combined with 5-FU (400 mg m−2, bolus) and LV (125 mg m−2) followed by 5-FU infusion (1200 mg m−2 over 48 h) yielded a response rate of 29%, while a further 34% of patients achieved SD (Assersohn et al, 2004). Irinotecan with bolus-LV/5-FU in the first-line treatment of gastric cancer provided a response rate of 22% (Blanke et al, 2001). However, this regimen (irinotecan 125 mg m−2, LV 20 mg m−2 plus 5-FU 500 mg m−2, all given weekly for 4 weeks followed by a 2-week rest) was associated with a high incidence of severe diarrhoea (28%) and neutropenia (36%) infection leading to substantial dose modifications (Blanke et al, 2001). By comparison, the combination of irinotecan with continuous rather than bolus infusions of LV/5-FU exhibited a lower incidence of grade 3 and 4 toxicities in colorectal cancer patients (Douillard et al, 2000; Saltz et al, 2000; Bouzid et al, 2003). Therefore, we have investigated a weekly dose of irinotecan (80 mg m−2) in combination with LV (500 mg m−2) and continuous 5-FU (2000 mg m−2 over 24 h) according to the AIO (Arbeitsgemeinschaft Internistische Onkologie) regimen (i.e. ILF) in gastric cancer patients. In a previous phase I study of ILF in the first- and second-line treatment of gastric cancer, we observed a response rate of 20% with a further 36% of patients reporting SD (Moehler et al, 2003). Importantly, toxicity was sufficiently manageable to allow outpatient-based treatment. Therefore, we initiated the present randomised, controlled, phase II study to compare the efficacy and safety of ILF with ELF in the first-line treatment of metastatic gastric cancer.\n\nBODY.PATIENTS AND METHODS.PATIENTS:\nEligible patients had untreated histologically proven gastric adenocarcinoma, or adenocarcinoma of the oesophagogastric junction with measurable metastatic disease and/or locally recurrent nodal involvement, were aged between 18 and 75 years with a Karnofsky performance score (KPS) ⩾60 and a life expectancy >12 weeks. Patients were required to have adequate haematological (neutrophils ⩾2.0 × 109 l−1, platelets ⩾150 × 109 l−1; haemoglobin ⩾10g dl−1), hepatic (total bilirubin ⩽1.25 × upper normal limit (UNL); aspartate (AST) and alanine (ALT) aminotransferases ⩽3 × UNL) and renal function (creatinine <1.25 × UNL). Patients with previous cancer therapies were excluded from the study. 
All patients provided signed and dated consent before entering the trial. The study was conducted in accordance with the principles of the Declaration of Helsinki and Good Clinical Practice guidelines and the protocol was initially approved by the Ethics committee of Aerztekammer Rheinland-Pfalz and later by all Ethics committees responsible for participating centres.\n\nBODY.PATIENTS AND METHODS.STUDY DESIGN AND RANDOMISATION:\nThis was an open-label, multicentre, phase II randomised trial with two treatment arms. Patients were randomly assigned and stratified according to centre, peritoneal involvement (yes/no) and prior gastrectomy (yes/no). The randomisation process was centralised and performed by the Coordination Centre for Clinical Trials (KKS), Mainz, Germany.\n\nBODY.PATIENTS AND METHODS.ADMINISTRATION OF STUDY DRUGS AND DOSE ADJUSTMENT:\nPatients assigned to ILF (Arm A) received irinotecan 80 mg m−2 intravenously (i.v.) over 60–90 min followed by LV 500 mg m−2 i.v. over 60 min and then 5-FU 2000 mg m−2 i.v. over 24 h, on day 1. Each cycle comprised six once-weekly treatments followed by a 13-day rest period. Systemic prophylactic atropine (0.25 mg) injections for irinotecan-related acute cholinergic symptoms were allowed for the first cycle but not recommended. Prophylactic treatment for delayed diarrhoea was not permitted. However, patients were carefully informed of the potential risk of delayed diarrhoea and neutropenia and the need for early intervention with loperamide, metoclopramide, antibiotics, or hospitalisation and parenteral rehydration in case of refractory diarrhoea (>48 h). Antiemetic treatment was performed using metoclopramide or HT-3 antagonists in a sequential manner. The prophylactic use of colony-stimulating factors was not permitted. Patients assigned to ELF (Arm B) received etoposide 120 mg m−2 i.v. over 60 min, LV 300 mg m−2 i.v. over 5–10 min and then 5-FU 500 mg m−2 bolus i.v. over 2–4 min, on day 1. Each cycle comprised three applications on consecutive days (1–3) followed by an 18-day rest. All study treatments were administered until disease progression, unacceptable toxicity or withdrawal of consent. In the event of toxicity (defined by the National Cancer Institute of Canada expanded common toxicity criteria; NCIC-CTC), treatment delays or dose reductions could be applied as follows. If at any time during a cycle there were moderate reductions in haematological function (neutrophil count 0.5–1.5 × 109 l−1, platelet count 25–75 × 109 l−1) or moderate diarrhoea or stomatitis (>grade 1), the next administration could be delayed for up to 2 weeks. If at any time haematological abnormalities were noted (neutrophils <0.5 × 109 l−1, neutrophils <1 × 109 l−1 with infection or fever, platelets <25 × 109 l−1) or if there were ⩾grade 3 or 4 diarrhoea or stomatitis, treatment had to be delayed until recovery to moderate levels (as described above) after which the following dose reductions were applied. For Arm A, 5-FU was reduced to 1600 mg m−2 and irinotecan to 65 mg m−2. For Arm B, in the case of haematological toxicity, etoposide had to be reduced to 100 mg m−2 and 5-FU to 400 mg m−2, and in the case of diarrhoea or stomatitis, 5-FU was reduced to 400 mg m−2. If a condition persisted despite dose reduction, or if a patient experienced myocardial infarction, treatment was terminated. In the case of hand–foot syndrome, the dose of 5-FU was to be reduced by 20%. 
Delayed diarrhoea was treated immediately with loperamide and rehydration and, if associated with severe neutropenia, a broad-spectrum antibiotic. Hospitalisation with i.v. rehydration was required for grade 4 or persistent (>48 h) diarrhoea, concomitant vomiting, fever, or KPS <60%.\n\nBODY.PATIENTS AND METHODS.STUDY EVALUATIONS:\nAt baseline up to five measurable lesions per organ and 10 lesions in total were to be identified as target lesions, measured using computed tomography (CT), and recorded according to the RECIST system (Response Evaluation Criteria In Solid Tumours; Therasse et al, 2000). The sum of the longest diameters for all target lesions was used as a reference for determining objective tumour response. Tumour responses were evaluated at week 7, week 14 and then every two cycles for patients receiving ILF or every four cycles for patients receiving ELF. Responses were determined according to RECIST as follows: complete response was defined as the disappearance of all target and nontarget lesions with no new lesions and confirmed by two observations at least 4 weeks apart; PR was defined as a reduction of 30% or more in the sums of the longest diameters of all measurable lesions relative to baseline with no new lesions; no change (NC) was defined as neither sufficient shrinkage to qualify for PR nor sufficient increase to qualify for progressive disease (PD) with no new lesions; and PD was defined as ⩾20% increase in the sum of the longest diameters, the occurrence of nontarget lesions (e.g. pleural effusion or ascites) or the appearance of brain metastases independently of performance at sites outside the brain. Safety and tolerability were assessed by regular clinical examinations and assessments of adverse events (weekly, at the end of treatment and at every 3 months of follow-up), disease symptoms, KPS, haematological and biochemical parameters.\n\nBODY.PATIENTS AND METHODS.STATISTICAL ANALYSIS:\nThe primary end point was objective clinical response (CR+PR) based on an interim analysis following 14 weeks of treatment. The one-sided exact Fisher's test was used to compare the treatment arms at the significance level α=5%. The analyses were performed on an intention-to-treat basis including all patients who were treated in the study. The secondary end points were ORR (for the entire treatment period), time to progression, tumour growth control, time to treatment failure (including progression, death or withdrawal) and survival. Time to event data were described by Kaplan–Meier estimates and treatment groups were compared by log-rank test. Time to event data were further evaluated by appropriate proportional Cox's models and results were summarised by hazard ratio point and 95% confidence interval (CI) estimates, and P-values of Wald χ2 test. Binary data were described by risk ratio (RR) point and 95% CI estimates and treatment groups were compared by exact Fisher's test. Binary data were further evaluated by appropriate logistic regression models and were summarised by odds ratio point and 95% CI estimates and P-values of Wald χ2 test. If not specified otherwise, P-values are presented from two-sided tests and two-sided 95% CI are presented. All analyses were performed using SAS version 6.12.\n\nBODY.RESULTS.PATIENT CHARACTERISTICS:\nIn all, 120 patients from 17 centres in Germany were randomised into the study between November 2000 and April 2003. 
Two patients from Arm A and four from Arm B withdrew without receiving study treatment; therefore, the intention-to-treat population contained 114 patients (56 received ILF and 58 received ELF). The baseline characteristics were well balanced between the two treatment groups (Table 1). The median age of patients was 62 years and the median KPS was 90%. In around one-third (31%) of patients, the primary tumour site was the oesophagogastric junction, 62% of patients had liver metastases and in 77% two or more organs were involved.\n\nBODY.RESULTS.RESPONSE RATES:\nThe objective clinical response rates following 14 weeks of treatment (primary end point) were 30% for ILF compared with 17% for ELF (RR 0.57, 95% CI 0.29–1.13, P=0.0766). The ORRs for the entire treatment period and including all assessments prior to discontinuation were 43% (24 of the 56) for ILF and 24% (14 of the 58) for ELF (Table 2). The increased response rate provided by ILF compared with ELF was statistically significant (RR=0.56; 95% CI=0.33–0.97; P=0.0467.) The tumour control rates (CR+PR+NC) were 63% (35 of the 56) and 41% (24 of the 58), respectively. Logistic regression analysis indicated that a baseline KPS ⩽80% reduced the likelihood of a response by 59% compared with patients whose KPS was greater than 80% (P=0.038) (Table 3). After adjustment for KPS, peritoneal involvement and surgery for primary tumour, the regression model also demonstrated that ILF was 138% more likely to provide a response when compared with the ELF regimen (Table 3, P=0.042).\n\nBODY.RESULTS.PROGRESSION-FREE SURVIVAL, TREATMENT FAILURE AND OVERALL SURVIVAL:\nAt the last data cutoff, the median follow-up was 9.4 months in Arm A and 5.8 months in Arm B. At this time, 96 of the 114 patients had died. Disease progression was the major cause of death and accounted for 79% of patients in both treatment groups. One patient from the ILF arm did not comply with the provided recommendations for the treatment of prolonged grade 3 diarrhoea and consequently died (i.e. toxic death). One patient in the ELF arm died from a cardiovascular event. Compared with ELF, the ILF regimen extended median progression-free survival, median time to treatment failure and median overall survival (Table 4). However, when the treatment groups were compared by log-rank test, there was no significant difference between the two treatments for any of these parameters (e.g. the Kaplan–Meier survival plot as shown in Figure 1). Investigational analyses found that the risk of progression was increased in patients with a primary tumour in the oesophagogastric junction and in those with metastatic involvement in two or more organs (Table 3). As would be expected, the risk of death was increased in patients with a low KPS, in those with two or more involved organs and in those with peritoneal involvement who received ELF (Table 3).\n\nBODY.RESULTS.SAFETY:\nThe median number of cycles administered in the study was two for ILF (Arm A) and three for ELF (Arm B) (Table 5). Although the median treatment duration period was over twice as long with ILF than with ELF, there were more dose administration delays (70%) and dose reductions (75%) with ILF than with ELF (52 and 45%, respectively). The main reason for discontinuing study treatment was disease progression; 54% of patients receiving ILF and 72% receiving ELF. 
Although only one patient in each treatment group withdrew because of treatment-related toxicity, five patients receiving ILF and three patients receiving ELF either withdrew consent or refused further treatment. The incidence of grade 3/4 haematological toxicities was low in both treatment groups with the exception of neutropenia, which was reported by 57% of patients receiving ELF (Table 6). There were more grade 3/4 gastrointestinal toxic events with ILF, notably diarrhoea, which was reported by 18% of patients compared with no reports with ELF. Grade 3/4 alopecia was reported by a significant proportion of patients receiving ELF (28%), but was only seen in 5% of those receiving ILF.\n\nBODY.DISCUSSION:\nAlthough chemotherapy regimens offer at best a slight, albeit statistically significant, improvement in survival for patients with gastric cancer, they are associated with a degree of toxicity that limits their value as a palliative treatment (Vanhoefer et al, 2000; Schoffski, 2002; Diaz-Rubio, 2004). The primary end point of clinical response at 14 weeks was selected so that a statistical comparison at a fixed time point could be made. However, as it is the convention in such studies, patients were treated until progression, and could respond to treatment at a later point. Therefore, the overall response and survival rates obtained from the entire dosing period provide a more clinically significant assessment of the efficacy of these regimens for discussion in relation to other trials in gastric cancer. The irinotecan-based combination provided again a greater ORR than that seen with the commonly used ELF regimen (43 vs 24%, respectively, P=0.0467). Overall response rates for ELF reported in previous studies range from 9 to 23% (Vanhoefer et al, 2000; Schulze-Bergkamen et al, 2002) and this compares well with the 24% response rate reported in this study. Accordingly, an ORR of nearly 50% for ILF, as seen in this study, is a substantial improvement and is in the range of previous reports of the use of this drug combination in this setting (Blanke et al, 2001; Moehler et al, 2003). The overall survival data in the present study also compare well with those from previous studies. The median overall survival with ELF has been reported at 7.2 and 8.0 months (Vanhoefer et al, 2000; Schulze-Bergkamen et al, 2002), which is similar to both the 8.3 months reported here and the data reported for irinotecan-based regimens in the second-line setting between 7.0 and 7.6 months (Moehler et al, 2003; Assersohn et al, 2004). By comparison, there was a nonsignificant trend for increased median survival with ILF in this study (10.8 months) and this compares well with data reported for more recent exploratory combinations such as capecitabine and docetaxel (10.5 months), and epirubicin, docetaxel and cisplatin (11.0 months) (Lee et al, 2004; Park et al, 2004). The same can be said of the progression-free survival period in the ILF group of 4.5 months, which compared well with the 4.1 and 5.2 months reported recently for docetaxel-based regimens (Lee et al, 2004; Park et al, 2004). In other randomised phase II studies, continuous 5-FU/LV infusion plus irinotecan has also provided promising efficacy (ORRs of 40–42%, median progression-free survival periods of 6.5–6.9 months and median overall survival periods of 10.7–11.3 months; Bouche et al, 2004; Pozzo et al, 2004). 
Consequently, large phase III studies are being considered to investigate irinotecan in combination with continuous 5-FU/LV infusion regimens. When patient histories, disease status and other factors were examined for their effects on clinical outcome, those patients who were in better general health (good performance status, low tumour burden) were more likely to achieve a response and less likely to have a progression event or die, regardless of the treatment arm to which they were randomised. Patients with peritoneal involvement at presentation have a generally poorer prognosis and as a group face a desperate need for improved treatment options (Rau et al, 1996). The data from this study demonstrated that these patients are less likely to suffer a fatal event if treated with ILF rather than ELF. This is potentially an important observation for the management of these difficult to treat patients (Blot et al, 1991; Bray et al, 2002). The extension of meaningful survival remains a major objective for oncologists who must therefore consider the impact of treatment-related toxicity. Overall, the occurrence of the toxicities in this study was consistent with the safety profiles of irinotecan, etoposide and 5-FU/LV. The ILF combination was well tolerated with a low and acceptable incidence of haematological toxicity. Gastrointestinal toxicity is a recognised side effect of ILF therapy (Douillard et al, 2000; Saltz et al, 2000), which can require hospitalisation and urgent medical intervention (Rothenberg et al, 2001). The incidence of grade 3 or 4 diarrhoea in the current study was comparable to the previous data in gastric cancer (Blanke et al, 2001; Moehler et al, 2003; Pozzo et al, 2004). With close monitoring of the patient, suitable medication and rehydration, most cases of diarrhoea can be managed effectively and do not present a significant obstacle to the clinical use of ILF. The toxicity observed in our study was lower than that reported by Douillard et al in the pivotal European first-line trial where patients with colorectal cancer received weekly irinotecan (80 mg m−2) plus an AIO-based regimen of 24-h high-dose 5-FU (2300 mg m−2) preceded by 2-h LV 500 mg m−2, and grade 3/4 diarrhoea was reported by 44% of patients (Douillard et al, 2000). The lower toxicity in our study might be due to the lower daily doses of 5-FU (2000 mg m−2 administered over 24 h). Work is ongoing to identify those patients who carry a specific genetic polymorphism in one of the main enzymes (UGT1A1) involved in the detoxification of irinotecan and are therefore more susceptible to the side effects of irinotecan (Mathijssen et al, 2001). Such work will improve the targeting of this useful therapy and may allow appropriate prescriptive dosing schedules on an individual basis. The present study concurs with similar phase II studies in that the combination of irinotecan with continuous LV/5-FU (ILF) represents a potentially valuable new treatment option for metastatic gastric cancer and requires further evaluation (Bouche et al, 2004; Pozzo et al, 2004).\n\n**Question:** Compared to Etoposide with 5-fluorouracil plus leucovorin (ELF) what was the result of Irinotecan with high-dose 5-fluorouracil plus leucovorin (ILF) on Time to treatment failure?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** no significant difference\n"
}
] |
no significant difference
|
385
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Effects of Bladder Training and/or Tolterodine in Female Patients with Overactive Bladder Syndrome: A Prospective, Randomized Study\n\n ABSTRACT:\nWe compared the effects of bladder training and/or tolterodine as first line treatment in female patients with overactive bladder (OAB). One hundred and thirty-nine female patients with OAB were randomized to treatment with bladder training (BT), tolterodine (To, 2 mg twice daily) or both (Co) for 12 weeks. Treatment efficacy was measured by micturition diary, urgency scores and patients' subjective assessment of their bladder condition. Mean frequency and nocturia significantly decreased in all treatment groups, declining 25.9% and 56.1%, respectively, in the BT group; 30.2% and 65.4%, respectively, in the To group; and 33.5% and 66.3%, respectively in the Co group (p<0.05 for each). The decrease in frequency was significantly greater in the Co group than in the BT group (p<0.05). Mean urgency score decreased by 44.8%, 62.2% and 60.2% in the BT, To, and Co groups, respectively, and the improvement was significantly greater in the To and Co groups than in the BT group (p<0.05 for each). Although BT, To and their combination were all effective in controlling OAB symptoms, combination therapy was more effective than either method alone. Tolterodine alone may be instituted as a first-line therapy, but may be more effective when combined with bladder training.\n\nBODY.INTRODUCTION:\nOveractive bladder (OAB), defined as increased frequency and urgency with or without urge incontinence, is a very common condition, especially in middle-aged women, with a reported incidence of approximately 17% of the general female population (1, 2). Since many women may consider OAB as a natural aging phenomenon and thus do not seek treatment, its actual prevalence may be even higher. Despite its common occurrence, however, our understanding (3-5) of the pathophysiology underlying the development of OAB has remained elementary. It is managed most commonly by drug therapy, followed by various behavioral interventions and, less frequently, by surgical methods. Each of these methods, however, has variable efficacy and adverse effects. Classically, urge incontinence has been considered most important, being the key index in treatment outcome assessments. However, since the new consensus by the International Continence Society on the diagnosis of OAB (6), the presence or absence of actual urine leakage was regarded as less important. With the adaptation of this new definition, the number of patients newly diagnosed with OAB is expected to increase, emphasizing the need to develop first-line interventions for their symptoms. The three most commonly employed methods for treating newly-diagnosed OAB are bladder training, anticholinergic therapy and a combination of the two. 
Here we report a controlled, randomized study comparing the clinical efficacy of these three methods, for the purpose of determining the most appropriate first-line therapeutic modality.\n\nBODY.MATERIALS AND METHODS:\nBetween May 2001 and April 2002, 139 women presenting with symptoms of urgency and frequency, with or without urge incontinence, were prospectively enrolled in this study. Inclusion criteria included being 18 yr of age or older, having a urination frequency of 8 or more times per day, the presence of urge symptoms that may or may not accompany incontinence, symptom duration of 3 months or longer and no prior history of treatment for OAB. We excluded patients with active urinary tract infection, clinically significant stress urinary incontinence, bladder outlet obstruction, interstitial cystitis, glaucoma or megacolon. Also excluded were patients with a maximal urine flow rate of less than 10 mL/sec or a postvoid residual urine amount that was more than 30% of the total amount voided on uroflowmetry. Upon enrollment, patients were randomly assigned to bladder training (BT, n=46), tolterodine treatment (To, 2 mg twice daily, n=47) or a combination of the two (Co, n=46). Each patient received a physical examination and uroflowmetry with postvoid residual urine scan. In addition, each patient kept a record of urgency scores, with 0 indicating no symptoms, 1 rarely, 2 occasionally, 3 often, and 4 all the time, and a frequency-volume chart. At 12 weeks, satisfaction score was assessed, with 0 representing fully satisfied, 1 much improved, 2 slightly improved, 3 no improvement, and 4 worsened; a change of 2 points or more was considered symptom improvement. The frequency-volume chart and a survey of adverse effects of medication were repeated every 4 weeks. Patients in the BT and Co groups were educated of the bladder drills by a nurse specialist. Gross and functional anatomy of the lower urinary tract and pelvis were demonstrated, with instructions on how bladder drills can increase functional bladder capacity and actual voiding habits. Each patient was instructed to examine her frequency-volume chart to determine the longest interval she could initially hold urination and sustain urge symptoms. Each patient was then taught to increase this interval by 15-min increments, with the aim of achieving a 3 to 4 hr interval and a voided volume of 300 to 400 mL. In addition, whenever urgency occurred, patients were educated to perform Kegel exercises for symptom alleviation. All bladder drill education was performed by one nurse specialist, who telephoned each patient every 2 weeks to make sure she was following the exact drill taught. In the Co group, the bladder drill was started at the same time as the start of medication. Of the 139 patients, 89 (64.0%) completed the 12 week treatment regimen and were subjects of outcome analyses. Voiding frequency, nocturia, urgency score before and after the treatment, satisfaction score at 12 weeks and adverse events were measured in the three groups. Student's t-test was used for statistical analysis, and p<0.05 was defined as statistically significant.\n\nBODY.RESULTS:\nPatient characteristics, including age, weight, symptom severity and duration, parity, pretreatment uroflowmetry profile, presence and degree of urge or stress incontinence, and urgency score, were similar among the three treatment groups (Table 1). 
At 12 weeks, 26 patients (56.5%) in the BT group, 32 (68.1 %) in the To group, and 31 (67.4%) in the Co group had completed the protocol. Mean frequency in the three groups decreased significantly, from 10.9 to 8.1 (25.9%) in the BT group, from 11.6 to 8.1 (30.2%) in the To group, and from 11.9 to 7.9 (33.5%) in the Co group (p<0.05 each) (Fig. 1). Nocturia also decreased significantly in the three groups, from 1.5 to 0.6 per night (56.1%) in the BT group, from 1.7 to 0.6 (65.4%) in the To group, and from 2.0 to 0.6 (66.3%) in the Co group (p<0.05 each) (Fig. 2), as did urgency scores, from 2.6 to 1.4 (44.8%) in the BT group, from 2.8 to 1.1 (62.2%) in the To group, and from 3.0 to 1.2 (60.2%) in the Co group (p<0.05 each) (Fig. 3). When we compared these changes among the three groups, we found that patients in the Co group showed greater improvements in frequency and urgency scores than did patients in the BT group (p<0.05 each), and that urgency scores in the To and Co groups showed significantly greater improvement than in the BT group (p=0.017 and p=0.021, respectively). No difference was observed between the To and Co groups. Satisfaction scores at 12 weeks did not differ among the three groups, being 1.5 in the BT group, with improvement in 53.9%; 1.4 in the To group, with improvement in 63.0%; and 1.3 in the Co group, with improvement in 71.0%. No adverse events were reported from the BT group, whereas 13 patients (40.6%) in the To group and 12 (38.7%) in the Co group reported one or more adverse events (Table 2). Most of the events were mild and well-tolerated, except that severe dry mouth occurred in 2 patients from each group, which led them to discontinue medication (withdrawal rates, 6.3% and 6.5%, respectively).\n\nBODY.DISCUSSION:\nDespite extensive research on the complicated interactions involving the neural network of the cerebrum, sacral cord and the detrusor, little is known regarding the pathologic process that occurs during the development of OAB (4, 5, 7). Accordingly, therapeutic approaches for OAB have been diverse, consisting largely of medical therapy, but also including behavioral interventions such as bladder training, pelvic floor muscle exercises with or without biofeedback, and electrical stimulation. Other therapeutic approaches have included neuromodulation of the sacral nerves, intravesical instillations of antimuscarinics and bladder overdistention, with augmentation cystoplasty remaining as the last resort. Naturally, non-invasive methods, including medical therapy and behavioral intervention, have been the initial treatments of choice for OAB symptoms. Although it is not known how bladder drills control and sustain OAB symptoms, it is believed that ill-trained voiding habits leading to decreased bladder compliance by frequent voiding are corrected by progressively increasing the voiding interval, thus increasing the voided volume and the functional bladder capacity. Retraining these patients to void at normal intervals would restore compliance when capacity increases and may contribute to symptom improvement (8-11). Observations on the electrical stimulation of the pelvic floor muscles (12) have shown that, during electrical stimulation, hyperactivity of the bladder was either diminished or completely abolished. This mechanism has been exploited as part of bladder training regimens. Clinically, bladder training has been shown to be effective in the management of urgency and urge incontinence (8, 13-15). 
When bladder training was compared with medical therapy (200 mg flavoxate hydrochloride plus 25 mg imipramine three times daily for 4 weeks), urge incontinence was completely controlled in 84% of patients in the BT group, with symptom relief in 76%, compared with 56% and 48%, respectively, in the medication group (16). A larger study (17), however, found that addition of behavioral interventions did not seem to offer any additional benefit to medical therapy (tolterodine) after 6 months. Urgency episodes and daytime frequency decreased 83% and 27.3%, respectively, in the tolterodine group, compared with 78.7% and 23%, respectively, in the combination group. Improvements in patients' perception of urinary symptoms were reported by 85.9% of the tolterodine group and 81.7% of the combination group. We found that all patients, a homogeneous group of female patients, regardless of treatment methodology, reported significant improvements in all voiding parameters examined, but that the addition of medication to BT resulted in additional improvements in urgency and frequency. Our finding of better outcomes in the Co than in the To group may be due to the bladder training program and design used here, which was a short-term intensive course of bladder drills. As many authors have emphasized, patients' understanding and compliance are the key factors in bladder training. Accordingly, the exercise program that we used did not simply require patients to contract and relax their pelvic floor muscles for certain periods of time. Instead, patients were given specific goals of 15 min each in sustaining contractions and were instructed how to increase these intervals. Having such a specific aim of what to accomplish with each exercise must have been enforcing and rewarding as well. In addition, repeated instructions, confirmation and reassurance were essential and constantly encouraged patients to stay on the right track for 12 weeks. Similar results have been reported (18), in which patients were provided with written information sheets describing the aims and endpoints of therapy as well as specific instructions on bladder drills. Although there were no formal bladder training sessions or additional follow-up for those in the bladder training-only group, the self-explanatory information sheets the patients kept would have reinforced and encouraged patients while on bladder drills. Thus, patients on medication plus bladder training reported significantly better results after therapy than did patients on medication alone. In the previous study, in which the drills were simplified, it may have facilitated patient adherence to treatment, but without any further reinforcements, the therapeutic benefits of BT may have been compromised (17). In this study, we found that patient adherence to treatment was lowest and discontinuation rate highest in the BT group, although the treatment efficacy results were similar at the end of 12 weeks. This may be attributed to the longer duration of patient maneuvering required before experiencing significant improvement in this BT-only group. Thus, despite many efforts by the physician to make patients stay on the correct treatment regimen, it was evidently easier for patients to take medication twice a day than to repeat exercises many times a day for months. We noted no adverse effects in the BT group, whereas, in the other 2 groups, the most common adverse effect was dry mouth. 
We found that 6.3% of patients in the To group and 6.5% in the Co group had to discontinue medication due to severe dry mouth, percentages lower than those previously reported (19, 20). These results indicate that 4 mg/day tolterodine is safe and does not require dose adjustment. From these results, we conclude that bladder training, tolterodine and the combination of the two are all effective in controlling OAB symptoms, with comparable patient satisfaction. Since a short-term intensive course of bladder drill is as effective as medical or combined therapy in all voiding symptoms, but does not have any adverse effects, bladder training can be safely and effectively employed as first-line treatment for OAB. In patients who have predominantly symptoms of frequency or urgency, tolterodine may be more effective, especially when combined with bladder training.\n\n**Question:** Compared to Bladder training only (BT) what was the result of Bladder training plus tolterodine (Co) on Urgency score improvement?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly increased\n"
}
] |
significantly increase
|
467
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Comparison of intravenous immunoglobulin and plasma exchange in treatment of mechanically ventilated children with Guillain Barré syndrome: a randomized study\n\n ABSTRACT.INTRODUCTION:\nRespiratory failure is a life threatening complication of Guillain Barré syndrome (GBS). There is no consensus on the specific treatment for this subset of children with GBS.\n\nABSTRACT.METHODS:\nThis was a prospective randomized study to compare the outcome of intravenous immunoglobulin (IVIG) and plasma exchange (PE) treatment in children with GBS requiring mechanical ventilation. Forty-one children with GBS requiring endotracheal mechanical ventilation (MV) within 14 days from disease onset were included. The ages of the children ranged from 49 to 143 months. Randomly, 20 children received a five-day course of IVIG (0.4 g/kg/day) and 21 children received a five-day course of one volume PE daily. Lumbar puncture (LP) was performed in 36 patients (18 in each group).\n\nABSTRACT.RESULTS:\nBoth groups had comparable age (p = 0.764), weight (p = 0.764), duration of illness prior to MV (p = 0.854), preceding diarrhea (p = 0.751), cranial nerve involvement (p = 0.756), muscle power using Medical Research Council (MRC) sum score (p = 0.266) and cerebrospinal fluid (CSF) protein (p = 0.606). Children in the PE group had a shorter period of MV (median 11 days, IQR 11.0 to 13.0) compared to IVIG group (median 13 days, IQR 11.3 to 14.5) with p = 0.037. Those in the PE group had a tendency for a shorter Pediatric Intensive Care Unit (PICU) stay (p = 0.094). A total of 20/21 (95.2%) and 18/20 (90%) children in the PE and IVIG groups respectively could walk unaided within four weeks after PICU discharge (p = 0.606). There was a negative correlation between CSF protein and duration of mechanical ventilation in the PE group (p = 0.037), but not in the IVIG group (p = 0.132).\n\nABSTRACT.CONCLUSIONS:\nIn children with GBS requiring MV, PE is superior to IVIG regarding the duration of MV but not PICU stay or the short term neurological outcome. The negative correlation between CSF protein values and duration of MV in PE group requires further evaluation of its clinical usefulness.\n\nABSTRACT.TRIAL REGISTRATION:\nClinicaltrials.gov Identifier NCT01306578\n\nBODY.INTRODUCTION:\nGuillain-Barré syndrome (GBS) is, currently, the most common cause of acute flaccid paralysis following the worldwide decline in incidence of poliomyelitis. Incidence varies according to age, geographic areas and diagnostic criteria used for inclusion. Annual incidence in western countries varies from 1.1 to 1.8/100,000 population per year [1-5] with a considerably lower annual incidence of 0.66/100,000 population per year in each of Taiwan [6] and China [7]. GBS usually follows infection by a number of bacterial and viral agents with Campylobacter jejuni representing the most common preceding infection [8-11]. 
The syndrome is also reported to rarely temporally follow vaccination with measles vaccine [12,13], tetanus toxoid [14], rabies vaccine [15], oral polio vaccine [16], polysaccharide meningococcal vaccine [17], measles-rubella vaccine [18], flu vaccine [19] and hepatitis B vaccine [19]. Since the publishing of the first report of the condition by Guillain, Barré, and Strohl in 1916, GBS has remained a clinically-diagnosed disorder. The condition is a polyneuropathy involving mainly motor but sometimes also sensory and autonomic nerves. It starts with rapidly progressive bilateral and relatively symmetric weakness in the lower limbs with diminished or absent deep tendon reflexes. Paralysis follows an ascending pattern involving trunk, upper limb and, finally, bulbar muscles. There can be numbness, parathesia and muscle pain and tenderness. Labile blood pressure with postural hypotension and labile heart rate with episodes of bradycardia up to asystole rarely occur denoting autonomic neuropathy. van Doorn et al. categorize diagnostic features of the condition into features required for diagnosis, including progressive weakness in both arms and legs, and areflexia or hyporeflexia, and features that strongly support the diagnosis, including progression of symptoms to a nadir over days to four weeks, relative symmetry of symptoms, mild sensory symptoms or signs, cranial nerve involvement, autonomic dysfunction, pain, a high concentration of protein in CSF without increase in cells and typical electro-diagnostic features [20]. A common, yet not an early, feature of GBS is increased cerebrospinal fluid (CSF) protein (> 45 mg/dL) without CSF pleocytosis (< 10 cells/mm3), often referred to as cytoalbuminous dissociation [21]. Electromyography may be used to confirm the diagnosis in the small subset of patients where the diagnosis is not straightforward. It is also useful to sub-classify patients into motor axonal neuropathy and acute inflammatory demyelinating polyneuropathy [22]. A recent study conducted in Egypt on children with GBS found that acute inflammatory demyelinating polyneuropathy, was the most common type (76%) while, acute motor axonal neuropathy, acute motor sensory axonal neuropathy and unclassified forms represented 8% each [23]. Respiratory failure is one of the most serious complications of GBS. It affects 15% of children with the condition [24]. The ability to predict the occurrence of respiratory failure and need for mechanical ventilation (MV) among patients with GBS has long been a target for neurologists and intensivists alike. Some bedside indicators of the likelihood of requiring MV are rapid disease progression, bulbar dysfunction, bilateral facial weakness, or dysautonomia, inability to stand, inability to lift the elbows or head, elevated liver enzymes and abnormal pulmonary function test [25-27]. Electrophysiological evidence of demyelination was also suggested to predict the need for endotracheal MV [28]. Treatment of GBS is a multidisciplinary effort. The general care of the child with muscle weakness includes regular monitoring of pulmonary and cardiovascular function for possible involvement of respiratory muscles and autonomic neuropathy respectively, prevention of infection and deep vein thrombosis, pain management, early physiotherapy, early rehabilitation once muscle power improvement starts and psychosocial support for the affected children and their families [20]. 
Specific treatment of severely affected children comprises intravenous immunoglobulin (IVIG) or plasma exchange (PE). Performed on more than 200 patients each, two large trials in the mid-1980s confirmed the beneficial role of PE in treatment of patients with GBS compared to the conventional treatment of the time [29,30]. The routine use of IVIG as the first line of treatment in GBS followed the publication of a randomized controlled trial (RCT) in 1992 showing a similar, if not a superior, effect of IVIG compared to PE [31]. A recent systematic review of heterogeneous study populations showed there was no evidence for a better outcome with either of the two modalities of treatment. There is also no evidence of a beneficial effect of adding corticosteroids to the treatment [32]. The objective of this study is to compare PE and IVIG as a first line treatment for children with severe GBS requiring MV regarding the duration of MV, PICU stay and short term neurological outcome.\n\nBODY.MATERIALS AND METHODS:\nChildren with GBS admitted to the Pediatric Intensive Care Unit (PICU) at Mansoura University Children Hospital, Mansoura, Egypt, with the need for MV were prospectively enrolled in the study. Cases were diagnosed to have GBS according to clinical criteria by van Doorn et al. [20]. Patients were eligible for inclusion if they required MV based on the indications listed below, and if the duration of muscle weakness did not exceed 14 days on enrollment. Children were not eligible for inclusion if they had muscle weakness for longer than 14 days before requiring MV and if IVIG or PE was started prior to enrollment. All patients were ventilated using endotracheal MV. Children were intubated if they were unable to protect their airway, had increased work of breathing (WOB), had PaO2 less than 70 mmHg in room air requiring increasing FiO2, or showed CO2 retention. When children were able to trigger spontaneous breathing, they were changed to a pressure-support spontaneous ventilation mode with continuous positive airway pressure of 5 cm H2O. Pressure support was gradually weaned to 10 cm H2O. If secretions were manageable and airway reflexes intact, a daily spontaneous breathing trial (SBT) was performed using a T-piece for two hours. Patients were extubated if SBT was successful. An SBT was deemed successful if there was no diaphoresis, increased WOB or apnea, tachycardia (defined by increase in the heart rate of 40 bpm or more) and if SpO2, pH, PaO2 and PaCO2 remained close to pre-SBT values. The decisions to initiate, wean and terminate MV were made independently by the attending consultant in accordance with and in strict adherence to the unit guidelines as above. Over a period of three years, from January 2007 to December 2009, 44 children were admitted with GBS requiring MV; 41 children fulfilled inclusion criteria and were randomized to receive either IVIG (20 children) or PE (21 children) for initial treatment. Randomization was done by computer-generated random tables. IVIG was administered in a dose of 0.4 g/kg/day for five days. Patients in the PE group received a daily one-volume PE for five consecutive days. The sample size yields a study power of 86.9%. On admission, the muscle power was recorded using the Medical Research Council (MRC) sum score [33]. Lumbar puncture (LP) was performed in the second week of illness in 36 out of the 41 children (18 in each group). LP was performed at the discretion of the attending consultant. 
The primary outcome measure was the duration of mechanical ventilation and secondary outcome measures were length of PICU stay and ability to walk unaided within four weeks of PICU discharge. Institutional research ethics committee approval was granted and an informed consent was obtained from legal guardians of children included in the study. Statistical analysis was performed using Statistical Package for the Social Sciences, Version 17 (SPSS, Inc., Chicago, IL, USA). p-value was considered significant if less than .05.\n\nBODY.RESULTS:\nChildren in the two treatment groups were comparable as regards base line characteristics, such as age, weight, duration of illness prior to requiring mechanical ventilation, presence or absence of preceding diarrhea, cranial nerve affection, severity of muscle weakness on PICU admission and CSF protein when available (Table 1). Table 1 Comparison of IVIG and PE groups regarding base line variables IVIG group ( n = 20) PE group ( n = 21) p- value Age (months) 106.0 ± 22.8 (74.3 to 113.5) 96.0 ± 32.8 (65.5 to 135.0) 0.764 1 Weight (kg) 32.5 ± 7.0 (23.0 to 34.8) 29.0 ± 10.1 (20.0 to 41.5) 0.764 1 DOI (days) 9.0 ± 2.7 (7.0 to 12.0) 9.0 ± 2.8 (6.5 to 11.5) 0.854 1 Diarrhea (%) Yes 13 (65%) 12 (57.1%) 0.751 2 No 7 (35%) 9 (42.9%) Cranial nerve affection (%) Yes 8 (40%) 10 (47.6%) 0.756 2 No 12 (60%) 11 (52.4%) MRC sum score 12.0 ± 4.8 (8.5 to 15.5) 12.0 ± 5.9 (4.0 to 12.0) 0.266 1 CSF protein (mg/dL) 152.1 ± 55.7 (88.3 to 173.6) 148.4 ± 43.0 (117.5 to 166.5) 0.606 1 IVIG, intravenous immunoglobulin; PE, plasma exchange; DOI, Duration of illness prior to requiring mechanical ventilation; MRC, Medical Research Council score. Data presented as median ± SD (interquartile range) 1 Mann Whitney test 2 Chi square test. There was a statistically insignificant better outcome in the PE group compared to the IVIG group. Favorable outcome was defined as the child's ability to walk independently for 10 meters (Grade 2 on GBS disability score) [34] within four weeks from PICU discharge. The post discharge management and physiotherapy were standardized for children in the two groups. In the IVIG group, favorable outcome was observed in 90% (18/20) of children compared to 95.2% (20/21) in the PE group (Table 2). Table 2 Comparison of PICU stay and duration of mechanical ventilation in the IVIG and PE groups PICU stay (days) Mechanical ventilation (Days) Favorable outcome (%) Yes No IVIG group 16.5 ± 2.1 (15.3 to 18.8) 13.0 ± 2.1 (11.3 to 14.5) 18 (90%) 2 (10%) PE group 15.0 ± 2.6 (13.0 to 17.0) 11.0 ± 1.5 (11.0 to 13.0) 20 (95.2%) 1 (4.8%) p- value .094 1 0.037 1 0.606 2 IVIG, intravenous immunoglobulin; PE, plasma exchange. Data presented as median ± SD (interquartile range) 1 Mann Whitney test 2 Chi square test. Children receiving PE had a significantly shorter period of mechanical ventilation compared to those receiving IVIG. The hospital stay was shorter in the PE group without statistical significance (Table 2). For all patients, there was a statistically significant negative correlation between CSF protein and duration of mechanical ventilation (p = .024). Examining the treatment groups separately, the significant negative correlation remained for the PE group (p-value = 0.037) and not for the IVIG group (p-value 0.132) (Table 3). 
Table 3 Correlation between CSF protein and duration of mechanical ventilation All patients IVIG group PE group CSF protein (mg/dL) 149.3 ± 49.0 (110.5 to 169.0) 152.1 ± 55.7 (88.3 to 173.6) 148.4 ± 43.0 (117.5 to 166.5) Mechanical ventilation (days) 12.0 ± 1.9 (11.0 to 13.0) 13.0 ± 2.1 (11.3 to 14.5) 11.0 ± 1.5 (11.0 to 13.0) p -value 0.024 1 0.132 1 0.037 1 CSF, cerebrospinal fluid; IVIG, intravenous immunoglobulin; PE, plasma exchange. Data presented as median ± SD (interquartile range) 1 Pearson test. There was no significant side effect attributable to any of the treatment modalities in any of the studied patients.\n\nBODY.DISCUSSION:\nRespiratory failure among children with GBS is a serious complication requiring intensive supportive treatment in addition to the specific treatment. The two treatment groups had no significant complications attributable to treatment intervention apart from minor hypotension episodes responding to fluid boluses in the PE group. This confirms the safety of both IVIG and PE for treating children with GBS. The results of this study suggest that PE is more useful than IVIG as a specific treatment for the subset of children with severe rapidly progressive GBS requiring MV. PE is believed to act by removal of circulating autoantibodies [35,36], while IVIG, among other mechanisms, is thought to work through blocking antibody production both in vivo [37] and in vitro [38,39]. Removal of autoantibodies, by PE, creates a concentration gradient between the lowered blood level and the extravascular space forcing antibody movement from the extra to the intra vascular space to be removed during the subsequent session [40]. Specific anti-ganglioside antibody levels were recently found to be associated with disease severity [41]. Children with severe, rapidly progressive GBS, as those included in this study, most likely have an intense autoantibody production with a high percentage of these antibodies already bound to nerves on development of respiratory failure. This might provide an explanation of why this subset of patients preferentially benefit from removal of already bound antibodies, PE, in comparison to blocking antibody production, IVIG. The clinical usefulness of this relatively small difference in the duration of mechanical ventilation between PE and IVIG groups waits to be confirmed in a larger, probably multi-center, study. CSF protein was elevated in all patients (Table 3) with no statistically significant difference between the two treatment groups (Table 1). On post hoc analysis, there was, however, a negative correlation between CSF protein level and duration of MV in PE but not IVIG group. This finding supports the above explanation of a shorter duration of MV in the PE group especially in view of a RCT showing CSF filtration to be at least as effective as PE for treatment of GBS [42] and the suggestion of a blood-CSF barrier dysfunction in adults and children with GBS [43]. PE group had a tendency for a shorter PICU stay, p = 0.094 with a comparable ability of children to walk unaided within four weeks from PICU discharge (Table 2). So, despite a shorter duration of MV, there was no difference between PE and IVIG groups regarding PICU stay or short term neurological outcome. To our knowledge, this is the first randomized controlled study comparing PE and IVIG for treatment of children with rapidly progressive GBS requiring MV. 
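The Pearson correlations in Table 3 relate per-patient CSF protein to days of mechanical ventilation. A minimal sketch of that computation with scipy, run on hypothetical placeholder pairs since the individual patient values are not published here (this is not the authors' code):

```python
# Minimal sketch of the Table 3 analysis: Pearson correlation between
# CSF protein (mg/dL) and duration of mechanical ventilation (days).
# The paired values below are hypothetical placeholders, NOT trial data.
from scipy.stats import pearsonr

csf_protein = [90.0, 120.0, 135.0, 150.0, 165.0, 180.0]   # hypothetical values
mv_days     = [14.0, 13.0, 12.0, 12.0, 11.0, 10.0]        # hypothetical values

r, p_value = pearsonr(csf_protein, mv_days)
print(f"r = {r:.2f}, p = {p_value:.3f}")  # a negative r mirrors the trend reported for the PE group
```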
An RCT conducted in adults with GBS showed no difference in the duration of MV in PE, IVIG and combined treatment groups [44]. This difference can be accounted for by differences in the age and PE protocol used. It would have been more informative to perform an electrophysiological study for the study children to confirm homogeneity of the study population. It would also add to the understanding of these results to study kinetics of autoantibodies in future research with a similar design.\n\nBODY.CONCLUSIONS:\nRapidly progressive GBS necessitating MV in children responds favorably to both IVIG and PE, with PE being superior in regard to the duration of MV but not PICU stay or short term neurological outcome. The finding of a negative correlation between CSF protein values and duration of MV in the PE group might serve as a basis for future research aiming at a better understanding of autoantibody kinetics during treatment of GBS.\n\nBODY.KEY MESSAGES:\n• Children with GBS requiring MV respond favorably to both IVIG and PE. • Using a weaning and extubation protocol, children receiving PE had a shorter duration of MV.\n\nBODY.ABBREVIATIONS:\nCSF: cerebrospinal fluid; DOI: duration of Illness; GBS: Guillain-Barré syndrome; IVIG: intravenous immunoglobulin; LP: lumbar puncture; MRC: Medical Research Council; MV: mechanical ventilation; PE: plasma exchange; PICU: pediatric intensive care unit; SBT: spontaneous breathing trial; WOB: work of breathing.\n\nBODY.COMPETING INTERESTS:\nThe authors declare that they have no competing interests.\n\nBODY.AUTHORS' CONTRIBUTIONS:\nMAE formulated the research idea, designed the plan, performed statistical analysis and proofread the draft. AME co-designed the plan and supervised the plasma exchange making sure of uniformity of the procedure in all studied subjects and proofread the draft. AMA followed up patients on PICU, made sure the plan was followed, and helped MAE in performing statistical analysis. ME followed up patients on PICU, drafted the manuscript, and designed statistical methodology. AAA co-formulated the research idea, co-designed the plan, followed up patients on PICU, and generated and was responsible for the random assignment of the groups. HME followed up patients on PICU, co-drafted the manuscript with ME and made sure PICU management of patients complied with the unit guidelines. All authors read and approved the final manuscript.\n\nBODY.AUTHORS' INFORMATION:\nHME is one of the founding members and the current president of the ESPIC (Egyptian Society of Pediatric Intensive Care). MAE is a member of the Board of Directors of WFPICCS (World Federation of Pediatric Intensive Care Societies) and the ambassador of WFPICCS in Egypt. An abstract of this article has been presented by MAE as an oral presentation at the 6th World Congress of Pediatric Intensive Care, Sydney, 2011.\n\n**Question:** Compared to intravenous immunoglobulin (IVIG) what was the result of plasma exchange (PE) on negative correlation between CSF protein and duration of mechanical ventilation?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly increased\n"
}
] |
significantly increase
|
273
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Randomized Clinical Trial of Laparoscopic Versus Open Repair of the Perforated Peptic Ulcer: The LAMA Trial\n\n ABSTRACT.BACKGROUND:\nLaparoscopic surgery has become popular during the last decade, mainly because it is associated with fewer postoperative complications than the conventional open approach. It remains unclear, however, if this benefit is observed after laparoscopic correction of perforated peptic ulcer (PPU). The goal of the present study was to evaluate whether laparoscopic closure of a PPU is as safe as conventional open correction.\n\nABSTRACT.METHODS:\nThe study was based on a randomized controlled trial in which nine medical centers from the Netherlands participated. A total of 109 patients with symptoms of PPU and evidence of air under the diaphragm were scheduled to receive a PPU repair. After exclusion of 8 patients during the operation, outcomes were analyzed for laparotomy (n = 49) and for the laparoscopic procedure (n = 52).\n\nABSTRACT.RESULTS:\nOperating time in the laparoscopy group was significantly longer than in the open group (75 min versus 50 min). Differences regarding postoperative dosage of opiates and the visual analog scale (VAS) for pain scoring system were in favor of the laparoscopic procedure. The VAS score on postoperative days 1, 3, and 7 was significant lower (P < 0.05) in the laparoscopic group. Complications were equally distributed. Hospital stay was also comparable: 6.5 days in the laparoscopic group versus 8.0 days in the open group (P = 0.235).\n\nABSTRACT.CONCLUSIONS:\nLaparoscopic repair of PPU is a safe procedure compared with open repair. The results considering postoperative pain favor the laparoscopic procedure.\n\nBODY.INTRODUCTION:\nThe incidence of perforated peptic ulcer (PPU) has declined over the past several years because of the introduction of anti-ulcer medication and Helicobacter eradication therapy [1, 2]. Nevertheless the incidence and mortality of PPU is 5–10%. The mortality will increase up to 50% if the perforation exists for more than 24 h [3, 4]. There are several options for treatment of PPU, but the preferred treatment is surgery by upper abdominal laparotomy [5, 6]. Mouret et al. published the first results of laparoscopic repair in 1990 [7]. He concluded that it was a good method that probably reduced postoperative wound problems and adhesions. After the success of laparoscopic cholecystectomy and other laparoscopic procedures, it was thought that patients would have less pain and a shorter hospital stay after laparoscopic correction of PPU [8, 9]. Various studies have shown that laparoscopic suturing of the perforation is feasible, but there is still no proof of real benefits of laparoscopic correction [3, 6, 10–12]. 
Therefore we performed a multicenter randomized trial comparing open correction of PPU with laparoscopic repair.\n\nBODY.METHODS.PARTICIPANTS:\nPatients with symptoms of the clinical diagnosis of PPU were included in nine medical centers in the Netherlands participating in the LAMA (LAparoscopische MAagperforatie) trial between March 1999 and July 2005. Eligible patients were informed of the two surgical approaches and were invited to participate in the study. Exclusion criteria were the inability to read the Dutch language patient information booklet, inability to complete informed consent, prior upper abdominal surgery, and current pregnancy. The ethics committees of all participating institutions approved the trial.\n\nBODY.METHODS.RANDOMIZATION:\nSurgeons contacted the study coordinator after the patients had provided informed consent and randomization took place by opening a sealed envelope. The envelope randomization was based on a computer-generated list provided by the trial statistician.\n\nBODY.METHODS.SURGICAL PROCEDURE:\nAll patients received intravenous antibiotics prior to operation and were allocated for Helicobacter pylori eradication therapy according to established guidelines [13]. The open surgical procedure was performed through an upper abdominal midline incision. Closure of PPU was to be achieved by sutures alone or in combination with an omental patch. After repair of the defect cultures were drawn from the peritoneal fluid, after which the peritoneal cavity was lavaged. During lavage it was permissible to insufflate the stomach to test for leakage of the closed defect. No method was specified for closing the abdomen. Laparoscopic repair was performed with the patient and the team set up in the \"French\" position. Trocars were placed at the umbilicus (video scope) and on the left and right midclavicular line above the level of the umbilicus (instruments). If necessary a fourth trocar was placed in the subxiphoid space for lavage or retraction of the liver. Surgeons were free to use either 0° or 30° video scopes for the procedure. The rest of the procedure was identical to that described above for open repair. No method was specified for closing the trocar incisions.\n\nBODY.METHODS.POSTOPERATIVE FOLLOW-UP:\nPostoperative pain was scored by means of a visual analog scale (VAS) for pain on days 1, 3, 7, and 28 ranging from 0 (no pain) to 10 (severe pain). In addition, the days during which opiates were used by the patients were registered. All complications, minor and major, were monitored. The treating surgeons determined time of discharge on the basis of physical well-being, tolerance of a normal diet, and ability to use the stairs. For this reason, this was an unblinded trial. Postoperative hospital stay without correction for time spent in hospital as a result of non-medical reasons (inadequate care at home) was calculated. Patients were invited to attend the outpatient clinic at 4 weeks, 6 months, and one year postoperatively. They were asked to complete forms related to pain and use of analgesics.\n\nBODY.METHODS.STATISTICAL ANALYSIS:\nData analysis was carried out according to the intention-to-treat principle as established in the trial protocol. Data were collected in a database, and statistical analyses were performed with the Statistical Package for Social Sciences for Windows (SPSS 15.0, SPSS Inc., Chicago, IL). A researcher blinded to the nature of the procedures performed all data analyses. The primary outcome of the trial was duration of hospital stay. 
The power analysis was performed on basis of a reduction in hospital stay by 1.5 days (10–8.5 days from admission) in favor of the laparoscopically treated group using a β of 0.80 and an α of 0.05. This resulted in a trial size of 50 patients per group. The Pearson chi-squared test was used to compare categorical variables, and the Mann-Whitney U-test was used to compare continuous variables as we could not assume normal distribution because of the relatively small numbers. In Tables 1–6 medians and interquartile ranges (IQR) are reported. All data were analyzed according to the intention-to-treat principle; i.e., patients remained in their assigned group even if during the procedure the surgeon judged the patient not to be suitable for the technique assigned or if conversion was required. Null hypotheses were tested two-sided and a P value of 0.05 or less was considered statistical significant.Table 1Baseline parametersLaparoscopic repairOpen repairP valuen = 52 n = 49 Male:female ratio1.3:11.9:1Median age (years) + IQR 66 (25.8)59 (29.5)0.185Median BMI (kg/m2) + IQR23 (4)22 (5)0.118Median duration of symptoms (h) + IQR11 (17)11 (19)0.948Median blood pressure systolic (mmHg) + IQR125 (38.5)130 (36.5)0.457Median blood pressure diastolic (mmHg) + IQR75 (25.5)75 (24.5)0.596Median heart rate (beats/min) + IQR 88 (34.0)92 (21)0.403Median body temperature (°C) + IQR 36.9 (0.92)36.8 (1.5)0.658Mannheim Peritonitis Index + IQR19.5 (8.25)16 (14)0.386Median white cell count (×109/l) + IQR 12.1 (8.9)12.1 (7.75)0.467Median ASA score + IQR 1.0 (1.0)1.5 (1.0)0.902IQR interquartile range, difference between 25th percentile and 75th percentile; BMI body mass indexASA American Society of Anesthesiologists Association score\n\nBODY.RESULTS.PATIENTS:\nA total of 109 patients were included in the trial based on a high suspicion of PPU (Fig. 1). Eight patients were excluded during operation because no gastric perforation was detected or a defect in other parts of the digestive tract was uncovered. Data for these patients were not collected and the patients were excluded from further analysis. The remaining 101 patients made up the study population; their baseline parameters are given in Table 1. Fifty-two patients were randomized for laparoscopic repair and 49 for open repair of the perforation. Forty patients were female. The mean age of the patients was 61 years. The BMI (body mass index) was equally distributed between the groups, with a median of 22.5. Patients in both groups had been suffering from symptoms for a mean duration of 11 h, and those in the laparoscopy group presented with a median Mannheim Peritonitis index [14] of 19.5, whereas those in the open group had a median Mannheim Peritonitis index of 16.Fig. 1Patient flow chart Thirty patients reported the use of non-steroidal anti-inflammatory drugs (NSAIDs; 17 laparoscopic versus 13 open), and 10 patients used proton pump inhibitors (6 laparoscopic versus 4 open). Patient history revealed gastric ulcer disease in 19 patients.\n\nBODY.RESULTS.INTRAOPERATIVE FINDINGS:\nThe discovered ulcer perforations were found to have a mean diameter of 10 mm, which did not differ between groups (Table 2). Location of the perforated ulcers was distributed equally between groups. Defects were located in the prepyloric region (n = 41), the postpyloric region (n = 34), and at the pylorus (n = 20). The median volume of lavage fluid used was 1,000 ml (range: 100–5,000 ml). The surgeon decided the amount of lavage used. 
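The sample size of 50 patients per group described in the statistical-analysis section can be roughly reproduced with a standard two-sample power calculation. The sketch below, using statsmodels, reads the stated "β of 0.80" as a power of 0.80 and assumes a common standard deviation of about 2.7 days for hospital stay, an assumption not given in the article, so it is illustrative only:

```python
# Rough reproduction of the reported power analysis: detect a 1.5-day
# reduction in hospital stay with 80% power at alpha = 0.05 (two-sided).
# The standard deviation (~2.7 days) is an assumption, not taken from the trial.
from statsmodels.stats.power import TTestIndPower

difference_days = 1.5
assumed_sd = 2.7
effect_size = difference_days / assumed_sd  # Cohen's d under the assumed SD

n_per_group = TTestIndPower().solve_power(effect_size=effect_size,
                                          alpha=0.05,
                                          power=0.80,
                                          alternative='two-sided')
print(round(n_per_group))  # ~51, close to the 50 per group reported in the article
```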
There was no consensus on how much was necessary. Median blood loss did not differ between groups. Skin-to-skin time differed by 25 min, favoring open repair of PPU (Table 2).Table 2Intraoperative findingsLaparoscopic repairOpen repairP valuen = 52n = 49Median size of perforation (mm) + IQR10.0 (7.0)7.0 (6.0)0.379Number of patients with defect Pyloric812 Postpyloric2014 Prepyloric1922 Median volume of lavage (ml) + IQR1,000 (1,500)1,000 (1,425)1.000 Median bloodloss (ml) + IQR10.0 (40.0)10.0 (50.0)0.423 Skin to skin time (min) + IQR75 (47.2)50 (25.5)0.000\n\nBODY.RESULTS.INTRAOPERATIVE COMPLICATIONS:\nConversion to open surgery was required in four patients (8%). Reasons for conversion included the inability to visualize the ulcer defect because of bleeding (n = 1/52; 2%), inability to reach the defect because of perforation in the vicinity of the gastroduodenal ligament and because of a dorsal gastric ulcer (n = 2/52; 4%), and inability to find the perforation (n = 1/52; 2%).\n\nBODY.RESULTS.POSTOPERATIVE COMPLICATIONS:\nComplications were statistically equally distributed between the two groups (Table 3). There were 12 complications in 9 patients in the laparoscopic group and 24 complications in 15 patients in the open group. Mortality was 4% in the laparoscopic group and 8% in the open group. In the laparoscopic group death was caused by sepsis due to leakage at the repair site. In the open group 3 patients died because of pulmonary problems (ARDS, pneumonia), and 1 patient died after complications following a cerebrovascular accident (CVA) combined with respiratory insufficiency.Table 3Postoperative complicationsLaparoscopic repairOpen repairP valuen = 52n = 49Pneumonia21Respiratory insufficiency13ARDS1Cardiac problems22Sepsis31Leakage at repair site2Abscess3Ileus1Fascial dehiscence1Wound infection3Urinary tract infection2Incisional hernia1Cerebrovascular accident1Death24Total complications12240.061Total of patients with complications ≥19 (18%)15 (36%)\n\nBODY.RESULTS.DISCHARGE:\nTime to discharge was similar for the two groups, with a median difference of 1.5 days (Table 4). Nasogastric decompression could be stopped after 2–3 days in both groups (Table 4).Table 4Duration of hospital stay, nasogastric decompressionLaparoscopic repairOpen repairP valuen = 52n = 49Median hospital stay (days) + IQR6.5 (9.3)8.0 (7.3)0.235Median duration of nasogastric decompression (days) + IQR2.0 (3.0)3.0 (1.3)0.334\n\nBODY.RESULTS.PAIN:\nVisual analog pain scores were in favor of laparoscopic repair (Table 5; p < 0.005). Although the median duration of opiate use in the two groups was 1.0, the mean duration in the open group was found to be 0.6 days longer than in the laparoscopic group (Table 6).Table 5Postoperative painMedian VAS pain scoreMedian VAS pain scoreP valueLaparoscopic repairOpen repairDay 1 + IQR3.8 (3.0)5.15 (2.5)0.001Day 3 + IQR2.1 (2.5)3.0 (2.4)0.035Day 7 + IQR1.0 (2.0)1.85 (2.8)0.036Day 28 + IQR0.3 (0.7)0.0 (1.7)0.748Table 6Postoperative opiate usageOpiate requirementOpiate requirementP valueLaparoscopic repairOpen repairMedian duration (days) + IQR1.0 (1.25)1.0 (1.0)0.007Mean duration (days) ± SD1.0 ± 0.91.6 ± 0.90.007\n\nBODY.RESULTS.VAS APPEARANCE OF SCAR:\nThe VAS score for appearance of the scar left by the respective procedures (subjectively recorded in the same way as pain) differed by 2.3 points, favoring the laparoscopic procedure (7.7 vs. 
5.4; P = 0.033)\n\nBODY.DISCUSSION:\nThe need for surgery for PPU has declined enormously in Europe and America with reported rates ranging from 50% to 80%, thanks to effective gastric acid-reducing medication [15]. For this reason, as well as because many surgeons prefer upper laparotomy, it took more time than expected to include 100 patients in our study. Reasons given by surgeons who prefer open repair were that it is a more familiar procedure and it can be completed faster than laparoscopy. It was also noted that patients often undergo operation at night, when the surgeon on call was not always laparoscopically trained. Other randomized trials have already shown the feasibility of laparoscopic repair of PPU [3, 4, 6, 10]. Only a few had more than 100 patients, and some emphasized results from subgroups of patients [8, 11, 12]. We did not subdivide our patients and included patients with risk factors for developing sepsis or conversion [10]. In eight of the original 109 patients (7%) it became evident during the operation that the patient had a diagnosis different from PPU. In the patients who were randomized for laparoscopy this discovery revealed the benefit of laparoscopy as a diagnostic procedure indicating either an upper or lower abdominoplasty or continuation of the laparoscopy [16]. Conversion rate in the laparoscopy group was 8% (4/52). This is much lower than that reported in literature, where conversion rates as high as 60% were found [3, 4, 6]. This maybe partially explained by the fact that only trained and experienced laparoscopic surgeons (those performing at least 50 laparoscopic procedures a year) participated in this trial, confirming the belief that this procedure should only be done by experienced surgeons [3–5]. Operating time was significantly longer in the laparoscopy group (75 min versus 50 min), which is comparable to reports in the literature [3, 10]. A possible explanation for the longer operative time is that laparoscopic suturing is more demanding [9, 17], especially if the edges of the perforation are infiltrated and friable. Sutures easily tear out and it is more difficult to take large bites and to tie knots properly. Use of a single-stitch method described by Siu et al. [18], fibrin glue, or a patch might solve this problem [12, 19]. Another reason for the increase in operating time is the irrigation procedure. Irrigating through a 5-mm or even a 10-mm trocar is time consuming, and suction of fluid decreases the volume of gas and reduces the pneumoperitoneum. There is no evidence that irrigation lowers the risk of sepsis [20], so it might only be necessary if there are food remnants in the abdomen; perhaps there is no need for it at all. One of the suspected risks of laparoscopic surgery is that of inducing sepsis by increasing bacterial translocation while establishing a pneumoperitoneum [6]. This danger could not be confirmed in our trial. Furthermore data suggest that there is a decrease in septic abdominal complications when laparosopic surgery is used [8]. Evidence already exists that laparoscopic correction of PPU causes less postoperative pain [6, 12, 17, 18]. The meta-analysis published by Lau [6] showed that eight out of ten studies showed a significant reduction in dosage of analgesics required in the laparoscopic group. Also, the three studies that had included VAS pain scores showed consistently lower pain scores, as was observed in our study as well. 
Whether this will lead to a better quality of life for patients, especially during the first weeks after surgery still needs to be analyzed. Although patients in our series who underwent laparoscopy had less postoperative pain, there was no difference in the length of hospital stay in our two groups. In fact, hospital stay overall in our patients was very long. This was most likely caused by the fact that many patients, especially the elderly, could not be discharged because of organizational reasons. Of the 101 patients, 41% were 70 years or older (24 in the laparoscopic group versus 17 in the open group). It appears that the age of PPU patients is increasing, and this will eventually represent a significant problem in the future [2, 3]. One benefit of the laparoscopic procedure not often mentioned in literature [6] is cosmetic outcome. Nowadays patients are aware of this benefit, and sometimes this is the reason why they demand laparoscopic surgery. In conclusion, the results of the LAMA trial confirm the results of other trials that laparoscopic correction of PPU is safe, feasible for the experienced laparoscopic surgeon, and causes less postoperative pain. Operating time was longer in the laparoscopic group and there was no difference in length of hospital stay or incidence of postoperative complications.\n\n**Question:** Compared to Conventional open approach what was the result of Laparoscopic surgery on Time to nasogastric decompression?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** no significant difference\n"
}
] |
no significant difference
|
456
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: The Effect of Clofibrate on Decreasing Serum Bilirubin in Healthy Term Neonates under Home Phototherapy\n\n ABSTRACT.OBJECTIVE:\nThis study was designed to determine the effect of clofibrate on neonatal uncomplicated jaundice treated with home phototherapy.\n\nABSTRACT.METHODS:\nThis clinical trial study was performed on 60 newborns with jaundice that received home phototherapy. Inclusion criteria were body weight between 2500 to 4000 gr, breastfed, total serum bilirubin (TSB) between 14 to 20 mg/dl, aged over 72 hours. The neonates were randomly divided into two groups. All received home phototherapy. Group I received a single dose of 50 mg/kg clofibrate and the other group served as control group. Total serum bilirubin level was measured every 24 hours.\n\nABSTRACT.FINDINGS:\nTwo groups were matched regarding weight, sex, age and first TSB. At 24 and 48 hours of treatment, the mean values of TSB in the clofibrate group were 13.72 (1.56), 9.5 (0.56) and in the control group 15.30 (1.44), 12.6 (1.44). The results show that TSB was significantly decreased after 24 and 48 hours in clofibrate group (P<0.001). The mean duration of phototherapy in group I was 72(0.0) hours and in the control group 76.80 (±9.76) hours. The duration of phototherapy was significantly shorter in clofibrate group (P<0.001).\n\nABSTRACT.CONCLUSION:\nClofibrate is effective for outpatients with neonatal hyperbilirubinemia who are under home phototherapy. Of course, further studies are needed for approved routine use of this drug in the treatment of neonatal jaundice.\n\nBODY.INTRODUCTION:\nIn the first day of life, bilirubin production is increased to an estimated average of 8 to 10 mg/kg of body weight per day, an amount about two or three times greater than that of adults[1]. Approximately two thirds neonates become clinically jaundiced. Phototherapy is the most widely used form of therapy for the treatment and prophylaxis of neonatal hyperbilirubinemia[2]. Several potential complications may occur with the use of phototherapy[3], such as retinal degeneration, fluid imbalance, and bronze baby syndrome. Some reports have demonstrated that home phototherapy may be an effective and safe alternative to prolonged hospitalization for healthy full-term neonates with jaundice. Clear advantages of home-centered phototherapy include: (1) reduced coast; (2) avoidance of parent-infant separation; and (3) parental satisfaction[4]. Although some pharmacological agents such as activated charcoal[5] or agar[6] are suggested to treat neonatal jaundice, further study of this type of therapy is needed before to their routine clinical use. Glucuronyl transferase activity can be increased with administration of phenolbarbital[7] or clofibrate[8]. Clofibrate decreases serum cholesterol and triglyceride levels in adults[9]. This drug has been proposed for treatment of neonatal hyperbilirubinemia[10]. 
Mohammadzadeh et al studied the effect of clofibrate on reducing serum bilirubin of neonates beyond the first week of life[11]. Badeli and colleagues determined the effect of clofibrate on uncomplicated hyperbilirubinemia of neonates during the first week of life[12]. Mohammadzadeh and Badeli studied the effect of 100 mg/kg clofibrate. The aim of the present study was to determine the effect of 50 mg/kg clofibrate on the TSB of neonates with hyperbilirubinemia under home phototherapy.\n\nBODY.SUBJECTS AND METHODS:\nFrom April 4, 2007 to June 20, 2008, 60 neonates with uncomplicated hyperbilirubinemia who were under home phototherapy in Rasht, Iran, entered our study. The ethics committee of our university approved the study. Informed consent of the parents was obtained. Inclusion criteria were body weight between 2500–4000 grams, gestational age 38 to 41 weeks, being breastfed, total serum bilirubin (TSB) between 14 to 20 mg/dl, and postnatal age above 72 hours. Infants with ABO or Rh incompatibility, G6PD deficiency, conjugated hyperbilirubinemia or any concomitant disease were excluded. Portable phototherapy units consisting of 4 special blue tubes placed 40 cm above the infant were used as home phototherapy. Patients were examined and a sample of serum bilirubin was taken daily. The neonates (n=60) were randomly divided into two groups and they were matched regarding weight, sex, age and value of the first TSB. 30 neonates were allocated to the clofibrate group (group I) and 30 neonates to the control group (group II). All neonates in the two groups received home phototherapy. The control group received no placebo. The clofibrate group (group I) was given a single dose of 50 mg/kg clofibrate before starting phototherapy. On the first day, laboratory tests included estimation of total bilirubin (direct and indirect), blood group including Rh of the mother and neonate, complete blood count (CBC), G6PD activity, Coombs test, and reticulocyte count. Direct, indirect and total serum bilirubin were measured every 24 hours by the same laboratory until TSB dropped under 10 mg/dl. The duration of phototherapy was recorded with a timer. Data were analyzed using SPSS 10; the t-test and ANOVA were used for analyzing the data. A P-value less than 0.05 was considered significant.\n\nBODY.FINDINGS:\nAmong the 60 neonates, 30 infants consisting of 18 (60%) females and 12 (40%) males belonged to group I (clofibrate) and 17 (56.7%) females and 13 (43.3%) males to group II (control). There were no statistically significant differences between the two groups regarding weight, age and first TSB value (Table 1). TSB values showed a significant difference between the two groups 24 and 48 hours after starting phototherapy at home, with group I showing lower values (Table 2).
Mean (±SD) phototherapy time in group I was 72(±0.0) hours and in group II 76.80 (±9.76) hours.\n Table 1 Mean (±SD) age, weight and first TSB- value in the two groups Parameter Group I Group II Mean (±SD ‡ ) Mean (±SD) \n Age (day) \n 6.80 (3.24) 6.6 (2.50) \n Weight (g) \n 3107 (394.89) 3150 (469.40) \n TSB * (mg/dl) \n 17.24 (1.48) 17.42 (1.44) ‡ SD: standard deviation * TSB: total serum bilirubin Table 2 Comparison of mean total serum bilirubin (mg/dl) after 24 and 48 hours of phototherapy in the two groups Time (hour) Group n Mean(SD * ) total bilirubin (mg/dl) \n p - value \n 24 \n \n Clofibrate(I) \n 30 13.72 (1.56) <0.001 \n Control (II) \n 30 15.30 (1.44) \n 48 \n \n Clofibrate(I) \n 30 9.5 (0.56) <0.001 \n Control (II) \n 30 12.6 (1.44) * Standard Deviation The duration of phototherapy was significantly shorter in clofibrate group (P<0.001). All neonates in group I needed phototherapy still after 72 hours but in group II 24 (80%) neonates had to receive phototherapy for 72 and 6 (20%) for 96 hours. Serum bilirubin levels after physician's examining were measured at beginning and 24, 48 hours after phototherapy. Bilirubin of both groups was measured in the same laboratory. On serial daily examination during phototherapy and up to 2 days after that no problems or side effects were observed. Also for a period of two months no complication was detected.\n\nBODY.DISCUSSION:\nIn this clinical trial study we determined the effect of lower dose (50 mg/kg) of clofibrate on neonatal hyperbilirubinemia in term neonates under home phototherapy. In the present study we demonstrated that in clofibrate group there was lower tsb after 24 and 48 hours home phototherapy compared with control group. Also the neonates in group I needed to receive shorter phototherapy than those in group II. The neonatal hyperbilirubinemia is the most common disease in neonatal period. Although there are advantages of home phototherapy, several potential complications may occur with its use. At present there is no safe drug for treatment of neonatal icterus and shortening of phototherapy time. The effect of numerous drugs on bilirubin metabolism and reducing hyperbilirubinemia has been identified. Metalloporphyrins and d-penicillamine act by inhibition of heme oxygenase, agar and charcoal by decreasing entrohepatic circulation. The clofibrate and phenobarbital are potent inducers of microsomal enzymes that increase bilirubin conjugation and excretion[13]. Clofibrate like phenobarbital is a hepatic bilirubin metabolism inducer, in addition causes 100% increase of hepatic bilirubin clearance within 6 hours with no drowsiness effect in contrast to the latter. Clofibrate when used as an antilipidemic agent in adults, has some side effects such as nausea, gastrointestinal disturbance, vomiting and loose stools[14]. Other possible complications include cramps, fatigue, pruritus and alopecia[14]. None of these side effects were reported in neonates with a single dose of clofibrate[15, 16]. Like phenobarbital, clofibrate increases bilirubin conjugation and excretion and is a better enhancer of glucuronosyl transferase induction causing 100% increase of hepatic bilirubin clearance within 6 hours[17], sooner than phenobarbital[13]. Phenobarbital has a long half life and its effect on severe jaundice is questionable. Phenobarbital also causes drowsiness in neonates and may slow down the oxidation of bilirubin in the brain leading to worse bilirubin toxicity[18]. 
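Because Table 2 reports group means, standard deviations and group sizes, the 24-hour comparison can be cross-checked from the summary statistics alone. The snippet below is an illustrative Welch-type t-test with scipy, in the spirit of the t-test named in the methods but not the authors' own analysis:

```python
# Cross-check of the 24-hour TSB comparison in Table 2 using only the
# published summary statistics (mean, SD, n). Illustrative only.
from scipy.stats import ttest_ind_from_stats

t_stat, p_value = ttest_ind_from_stats(mean1=13.72, std1=1.56, nobs1=30,  # clofibrate group
                                        mean2=15.30, std2=1.44, nobs2=30,  # control group
                                        equal_var=False)                   # Welch's t-test
print(f"t = {t_stat:.2f}, p = {p_value:.5f}")  # p is well below 0.001, matching the reported P<0.001
```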
Although we detected no side effects in infants given a single dose of clofibrate in our studies, determination of the minimum effective dose remains important. This study demonstrated that a single dose of clofibrate (50 mg/kg) significantly reduces indirect hyperbilirubinemia in healthy breastfed term newborns receiving home-based phototherapy. This result is similar to the studies of Mohammadzadeh et al and Badeli et al, noting that they used 100 mg/kg of the drug in hospitalized patients. Our study demonstrated that a lower dose of clofibrate is also effective for neonatal hyperbilirubinemia treated with home phototherapy. Glabilan[17] used clofibrate in the treatment of early jaundice in full term neonates. He found a significant reduction in bilirubinemia at 16 hours and a decrease in the intensity and duration of jaundice and also in the phototherapy requirement. Lindenbaum showed a significant reduction in the TSB of 47 neonates 16 hours after treatment with clofibrate[19]. Other studies in Iran and France confirmed the beneficial effect of clofibrate for reducing TSB in babies with no risk factor for hemolysis[20]. In conformity with those previous studies, we did not detect any untoward complications during therapy and after two months of follow-up. Twenty mothers discontinued home phototherapy because of their own concern or others' comments; they were excluded from the study.\n\nBODY.CONCLUSION:\nThis study suggests that a lower dose of clofibrate is effective in outpatients with neonatal hyperbilirubinemia under home phototherapy. Similar to other studies, we found no complications. Further studies are needed before routine use of this drug can be approved for non-hemolytic hyperbilirubinemia of healthy term newborns treated as outpatients.\n\n**Question:** Compared to control group what was the result of single dose of 50 mg/kg clofibrate on side effects?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** no significant difference\n"
}
] |
no significant difference
|
269
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Randomized Clinical Trial of Laparoscopic Versus Open Repair of the Perforated Peptic Ulcer: The LAMA Trial\n\n ABSTRACT.BACKGROUND:\nLaparoscopic surgery has become popular during the last decade, mainly because it is associated with fewer postoperative complications than the conventional open approach. It remains unclear, however, if this benefit is observed after laparoscopic correction of perforated peptic ulcer (PPU). The goal of the present study was to evaluate whether laparoscopic closure of a PPU is as safe as conventional open correction.\n\nABSTRACT.METHODS:\nThe study was based on a randomized controlled trial in which nine medical centers from the Netherlands participated. A total of 109 patients with symptoms of PPU and evidence of air under the diaphragm were scheduled to receive a PPU repair. After exclusion of 8 patients during the operation, outcomes were analyzed for laparotomy (n = 49) and for the laparoscopic procedure (n = 52).\n\nABSTRACT.RESULTS:\nOperating time in the laparoscopy group was significantly longer than in the open group (75 min versus 50 min). Differences regarding postoperative dosage of opiates and the visual analog scale (VAS) for pain scoring system were in favor of the laparoscopic procedure. The VAS score on postoperative days 1, 3, and 7 was significant lower (P < 0.05) in the laparoscopic group. Complications were equally distributed. Hospital stay was also comparable: 6.5 days in the laparoscopic group versus 8.0 days in the open group (P = 0.235).\n\nABSTRACT.CONCLUSIONS:\nLaparoscopic repair of PPU is a safe procedure compared with open repair. The results considering postoperative pain favor the laparoscopic procedure.\n\nBODY.INTRODUCTION:\nThe incidence of perforated peptic ulcer (PPU) has declined over the past several years because of the introduction of anti-ulcer medication and Helicobacter eradication therapy [1, 2]. Nevertheless the incidence and mortality of PPU is 5–10%. The mortality will increase up to 50% if the perforation exists for more than 24 h [3, 4]. There are several options for treatment of PPU, but the preferred treatment is surgery by upper abdominal laparotomy [5, 6]. Mouret et al. published the first results of laparoscopic repair in 1990 [7]. He concluded that it was a good method that probably reduced postoperative wound problems and adhesions. After the success of laparoscopic cholecystectomy and other laparoscopic procedures, it was thought that patients would have less pain and a shorter hospital stay after laparoscopic correction of PPU [8, 9]. Various studies have shown that laparoscopic suturing of the perforation is feasible, but there is still no proof of real benefits of laparoscopic correction [3, 6, 10–12]. 
Therefore we performed a multicenter randomized trial comparing open correction of PPU with laparoscopic repair.\n\nBODY.METHODS.PARTICIPANTS:\nPatients with symptoms of the clinical diagnosis of PPU were included in nine medical centers in the Netherlands participating in the LAMA (LAparoscopische MAagperforatie) trial between March 1999 and July 2005. Eligible patients were informed of the two surgical approaches and were invited to participate in the study. Exclusion criteria were the inability to read the Dutch language patient information booklet, inability to complete informed consent, prior upper abdominal surgery, and current pregnancy. The ethics committees of all participating institutions approved the trial.\n\nBODY.METHODS.RANDOMIZATION:\nSurgeons contacted the study coordinator after the patients had provided informed consent and randomization took place by opening a sealed envelope. The envelope randomization was based on a computer-generated list provided by the trial statistician.\n\nBODY.METHODS.SURGICAL PROCEDURE:\nAll patients received intravenous antibiotics prior to operation and were allocated for Helicobacter pylori eradication therapy according to established guidelines [13]. The open surgical procedure was performed through an upper abdominal midline incision. Closure of PPU was to be achieved by sutures alone or in combination with an omental patch. After repair of the defect cultures were drawn from the peritoneal fluid, after which the peritoneal cavity was lavaged. During lavage it was permissible to insufflate the stomach to test for leakage of the closed defect. No method was specified for closing the abdomen. Laparoscopic repair was performed with the patient and the team set up in the \"French\" position. Trocars were placed at the umbilicus (video scope) and on the left and right midclavicular line above the level of the umbilicus (instruments). If necessary a fourth trocar was placed in the subxiphoid space for lavage or retraction of the liver. Surgeons were free to use either 0° or 30° video scopes for the procedure. The rest of the procedure was identical to that described above for open repair. No method was specified for closing the trocar incisions.\n\nBODY.METHODS.POSTOPERATIVE FOLLOW-UP:\nPostoperative pain was scored by means of a visual analog scale (VAS) for pain on days 1, 3, 7, and 28 ranging from 0 (no pain) to 10 (severe pain). In addition, the days during which opiates were used by the patients were registered. All complications, minor and major, were monitored. The treating surgeons determined time of discharge on the basis of physical well-being, tolerance of a normal diet, and ability to use the stairs. For this reason, this was an unblinded trial. Postoperative hospital stay without correction for time spent in hospital as a result of non-medical reasons (inadequate care at home) was calculated. Patients were invited to attend the outpatient clinic at 4 weeks, 6 months, and one year postoperatively. They were asked to complete forms related to pain and use of analgesics.\n\nBODY.METHODS.STATISTICAL ANALYSIS:\nData analysis was carried out according to the intention-to-treat principle as established in the trial protocol. Data were collected in a database, and statistical analyses were performed with the Statistical Package for Social Sciences for Windows (SPSS 15.0, SPSS Inc., Chicago, IL). A researcher blinded to the nature of the procedures performed all data analyses. The primary outcome of the trial was duration of hospital stay. 
The power analysis was performed on basis of a reduction in hospital stay by 1.5 days (10–8.5 days from admission) in favor of the laparoscopically treated group using a β of 0.80 and an α of 0.05. This resulted in a trial size of 50 patients per group. The Pearson chi-squared test was used to compare categorical variables, and the Mann-Whitney U-test was used to compare continuous variables as we could not assume normal distribution because of the relatively small numbers. In Tables 1–6 medians and interquartile ranges (IQR) are reported. All data were analyzed according to the intention-to-treat principle; i.e., patients remained in their assigned group even if during the procedure the surgeon judged the patient not to be suitable for the technique assigned or if conversion was required. Null hypotheses were tested two-sided and a P value of 0.05 or less was considered statistical significant.Table 1Baseline parametersLaparoscopic repairOpen repairP valuen = 52 n = 49 Male:female ratio1.3:11.9:1Median age (years) + IQR 66 (25.8)59 (29.5)0.185Median BMI (kg/m2) + IQR23 (4)22 (5)0.118Median duration of symptoms (h) + IQR11 (17)11 (19)0.948Median blood pressure systolic (mmHg) + IQR125 (38.5)130 (36.5)0.457Median blood pressure diastolic (mmHg) + IQR75 (25.5)75 (24.5)0.596Median heart rate (beats/min) + IQR 88 (34.0)92 (21)0.403Median body temperature (°C) + IQR 36.9 (0.92)36.8 (1.5)0.658Mannheim Peritonitis Index + IQR19.5 (8.25)16 (14)0.386Median white cell count (×109/l) + IQR 12.1 (8.9)12.1 (7.75)0.467Median ASA score + IQR 1.0 (1.0)1.5 (1.0)0.902IQR interquartile range, difference between 25th percentile and 75th percentile; BMI body mass indexASA American Society of Anesthesiologists Association score\n\nBODY.RESULTS.PATIENTS:\nA total of 109 patients were included in the trial based on a high suspicion of PPU (Fig. 1). Eight patients were excluded during operation because no gastric perforation was detected or a defect in other parts of the digestive tract was uncovered. Data for these patients were not collected and the patients were excluded from further analysis. The remaining 101 patients made up the study population; their baseline parameters are given in Table 1. Fifty-two patients were randomized for laparoscopic repair and 49 for open repair of the perforation. Forty patients were female. The mean age of the patients was 61 years. The BMI (body mass index) was equally distributed between the groups, with a median of 22.5. Patients in both groups had been suffering from symptoms for a mean duration of 11 h, and those in the laparoscopy group presented with a median Mannheim Peritonitis index [14] of 19.5, whereas those in the open group had a median Mannheim Peritonitis index of 16.Fig. 1Patient flow chart Thirty patients reported the use of non-steroidal anti-inflammatory drugs (NSAIDs; 17 laparoscopic versus 13 open), and 10 patients used proton pump inhibitors (6 laparoscopic versus 4 open). Patient history revealed gastric ulcer disease in 19 patients.\n\nBODY.RESULTS.INTRAOPERATIVE FINDINGS:\nThe discovered ulcer perforations were found to have a mean diameter of 10 mm, which did not differ between groups (Table 2). Location of the perforated ulcers was distributed equally between groups. Defects were located in the prepyloric region (n = 41), the postpyloric region (n = 34), and at the pylorus (n = 20). The median volume of lavage fluid used was 1,000 ml (range: 100–5,000 ml). The surgeon decided the amount of lavage used. 
There was no consensus on how much was necessary. Median blood loss did not differ between groups. Skin-to-skin time differed by 25 min, favoring open repair of PPU (Table 2).Table 2Intraoperative findingsLaparoscopic repairOpen repairP valuen = 52n = 49Median size of perforation (mm) + IQR10.0 (7.0)7.0 (6.0)0.379Number of patients with defect Pyloric812 Postpyloric2014 Prepyloric1922 Median volume of lavage (ml) + IQR1,000 (1,500)1,000 (1,425)1.000 Median bloodloss (ml) + IQR10.0 (40.0)10.0 (50.0)0.423 Skin to skin time (min) + IQR75 (47.2)50 (25.5)0.000\n\nBODY.RESULTS.INTRAOPERATIVE COMPLICATIONS:\nConversion to open surgery was required in four patients (8%). Reasons for conversion included the inability to visualize the ulcer defect because of bleeding (n = 1/52; 2%), inability to reach the defect because of perforation in the vicinity of the gastroduodenal ligament and because of a dorsal gastric ulcer (n = 2/52; 4%), and inability to find the perforation (n = 1/52; 2%).\n\nBODY.RESULTS.POSTOPERATIVE COMPLICATIONS:\nComplications were statistically equally distributed between the two groups (Table 3). There were 12 complications in 9 patients in the laparoscopic group and 24 complications in 15 patients in the open group. Mortality was 4% in the laparoscopic group and 8% in the open group. In the laparoscopic group death was caused by sepsis due to leakage at the repair site. In the open group 3 patients died because of pulmonary problems (ARDS, pneumonia), and 1 patient died after complications following a cerebrovascular accident (CVA) combined with respiratory insufficiency.Table 3Postoperative complicationsLaparoscopic repairOpen repairP valuen = 52n = 49Pneumonia21Respiratory insufficiency13ARDS1Cardiac problems22Sepsis31Leakage at repair site2Abscess3Ileus1Fascial dehiscence1Wound infection3Urinary tract infection2Incisional hernia1Cerebrovascular accident1Death24Total complications12240.061Total of patients with complications ≥19 (18%)15 (36%)\n\nBODY.RESULTS.DISCHARGE:\nTime to discharge was similar for the two groups, with a median difference of 1.5 days (Table 4). Nasogastric decompression could be stopped after 2–3 days in both groups (Table 4).Table 4Duration of hospital stay, nasogastric decompressionLaparoscopic repairOpen repairP valuen = 52n = 49Median hospital stay (days) + IQR6.5 (9.3)8.0 (7.3)0.235Median duration of nasogastric decompression (days) + IQR2.0 (3.0)3.0 (1.3)0.334\n\nBODY.RESULTS.PAIN:\nVisual analog pain scores were in favor of laparoscopic repair (Table 5; p < 0.005). Although the median duration of opiate use in the two groups was 1.0, the mean duration in the open group was found to be 0.6 days longer than in the laparoscopic group (Table 6).Table 5Postoperative painMedian VAS pain scoreMedian VAS pain scoreP valueLaparoscopic repairOpen repairDay 1 + IQR3.8 (3.0)5.15 (2.5)0.001Day 3 + IQR2.1 (2.5)3.0 (2.4)0.035Day 7 + IQR1.0 (2.0)1.85 (2.8)0.036Day 28 + IQR0.3 (0.7)0.0 (1.7)0.748Table 6Postoperative opiate usageOpiate requirementOpiate requirementP valueLaparoscopic repairOpen repairMedian duration (days) + IQR1.0 (1.25)1.0 (1.0)0.007Mean duration (days) ± SD1.0 ± 0.91.6 ± 0.90.007\n\nBODY.RESULTS.VAS APPEARANCE OF SCAR:\nThe VAS score for appearance of the scar left by the respective procedures (subjectively recorded in the same way as pain) differed by 2.3 points, favoring the laparoscopic procedure (7.7 vs. 
5.4; P = 0.033)\n\nBODY.DISCUSSION:\nThe need for surgery for PPU has declined enormously in Europe and America with reported rates ranging from 50% to 80%, thanks to effective gastric acid-reducing medication [15]. For this reason, as well as because many surgeons prefer upper laparotomy, it took more time than expected to include 100 patients in our study. Reasons given by surgeons who prefer open repair were that it is a more familiar procedure and it can be completed faster than laparoscopy. It was also noted that patients often undergo operation at night, when the surgeon on call was not always laparoscopically trained. Other randomized trials have already shown the feasibility of laparoscopic repair of PPU [3, 4, 6, 10]. Only a few had more than 100 patients, and some emphasized results from subgroups of patients [8, 11, 12]. We did not subdivide our patients and included patients with risk factors for developing sepsis or conversion [10]. In eight of the original 109 patients (7%) it became evident during the operation that the patient had a diagnosis different from PPU. In the patients who were randomized for laparoscopy this discovery revealed the benefit of laparoscopy as a diagnostic procedure indicating either an upper or lower laparotomy or continuation of the laparoscopy [16]. Conversion rate in the laparoscopy group was 8% (4/52). This is much lower than that reported in literature, where conversion rates as high as 60% were found [3, 4, 6]. This may be partially explained by the fact that only trained and experienced laparoscopic surgeons (those performing at least 50 laparoscopic procedures a year) participated in this trial, confirming the belief that this procedure should only be done by experienced surgeons [3–5]. Operating time was significantly longer in the laparoscopy group (75 min versus 50 min), which is comparable to reports in the literature [3, 10]. A possible explanation for the longer operative time is that laparoscopic suturing is more demanding [9, 17], especially if the edges of the perforation are infiltrated and friable. Sutures easily tear out and it is more difficult to take large bites and to tie knots properly. Use of a single-stitch method described by Siu et al. [18], fibrin glue, or a patch might solve this problem [12, 19]. Another reason for the increase in operating time is the irrigation procedure. Irrigating through a 5-mm or even a 10-mm trocar is time consuming, and suction of fluid decreases the volume of gas and reduces the pneumoperitoneum. There is no evidence that irrigation lowers the risk of sepsis [20], so it might only be necessary if there are food remnants in the abdomen; perhaps there is no need for it at all. One of the suspected risks of laparoscopic surgery is that of inducing sepsis by increasing bacterial translocation while establishing a pneumoperitoneum [6]. This danger could not be confirmed in our trial. Furthermore, data suggest that there is a decrease in septic abdominal complications when laparoscopic surgery is used [8]. Evidence already exists that laparoscopic correction of PPU causes less postoperative pain [6, 12, 17, 18]. The meta-analysis published by Lau [6] showed that eight out of ten studies showed a significant reduction in dosage of analgesics required in the laparoscopic group. Also, the three studies that had included VAS pain scores showed consistently lower pain scores, as was observed in our study as well. 
Whether this will lead to a better quality of life for patients, especially during the first weeks after surgery still needs to be analyzed. Although patients in our series who underwent laparoscopy had less postoperative pain, there was no difference in the length of hospital stay in our two groups. In fact, hospital stay overall in our patients was very long. This was most likely caused by the fact that many patients, especially the elderly, could not be discharged because of organizational reasons. Of the 101 patients, 41% were 70 years or older (24 in the laparoscopic group versus 17 in the open group). It appears that the age of PPU patients is increasing, and this will eventually represent a significant problem in the future [2, 3]. One benefit of the laparoscopic procedure not often mentioned in literature [6] is cosmetic outcome. Nowadays patients are aware of this benefit, and sometimes this is the reason why they demand laparoscopic surgery. In conclusion, the results of the LAMA trial confirm the results of other trials that laparoscopic correction of PPU is safe, feasible for the experienced laparoscopic surgeon, and causes less postoperative pain. Operating time was longer in the laparoscopic group and there was no difference in length of hospital stay or incidence of postoperative complications.\n\n**Question:** Compared to Conventional open approach what was the result of Laparoscopic surgery on Hospital permanence?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** no significant difference\n"
}
] |
no significant difference
|
463
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Artemether-Lumefantrine versus Dihydroartemisinin-Piperaquine for Treatment of Malaria: A Randomized Trial\n\n ABSTRACT.OBJECTIVES:\nTo compare the efficacy and safety of artemether-lumefantrine (AL) and dihydroartemisinin-piperaquine (DP) for treating uncomplicated falciparum malaria in Uganda.\n\nABSTRACT.DESIGN:\nRandomized single-blinded clinical trial.\n\nABSTRACT.SETTING:\nApac, Uganda, an area of very high malaria transmission intensity.\n\nABSTRACT.PARTICIPANTS:\nChildren aged 6 mo to 10 y with uncomplicated falciparum malaria.\n\nABSTRACT.INTERVENTION:\nTreatment of malaria with AL or DP, each following standard 3-d dosing regimens.\n\nABSTRACT.OUTCOME MEASURES:\nRisks of recurrent parasitemia at 28 and 42 d, unadjusted and adjusted by genotyping to distinguish recrudescences and new infections.\n\nABSTRACT.RESULTS:\nOf 421 enrolled participants, 417 (99%) completed follow-up. The unadjusted risk of recurrent falciparum parasitemia was significantly lower for participants treated with DP than for those treated with AL after 28 d (11% versus 29%; risk difference [RD] 18%, 95% confidence interval [CI] 11%–26%) and 42 d (43% versus 53%; RD 9.6%, 95% CI 0%–19%) of follow-up. Similarly, the risk of recurrent parasitemia due to possible recrudescence (adjusted by genotyping) was significantly lower for participants treated with DP than for those treated with AL after 28 d (1.9% versus 8.9%; RD 7.0%, 95% CI 2.5%–12%) and 42 d (6.9% versus 16%; RD 9.5%, 95% CI 2.8%–16%). Patients treated with DP had a lower risk of recurrent parasitemia due to non-falciparum species, development of gametocytemia, and higher mean increase in hemoglobin compared to patients treated with AL. Both drugs were well tolerated; serious adverse events were uncommon and unrelated to study drugs.\n\nABSTRACT.CONCLUSION:\nDP was superior to AL for reducing the risk of recurrent parasitemia and gametocytemia, and provided improved hemoglobin recovery. DP thus appears to be a good alternative to AL as first-line treatment of uncomplicated malaria in Uganda. To maximize the benefit of artemisinin-based combination therapy in Africa, treatment should be integrated with aggressive strategies to reduce malaria transmission intensity.\n\nBODY.INTRODUCTION:\nIn Africa, treatment of uncomplicated malaria is undergoing dramatic changes. In response to widespread resistance of the parasite to commonly used monotherapies, particularly chloroquine (CQ) and sulfadoxine-pyrimethamine (SP), many countries have recently adopted artemisinin-based combination therapy (ACT) as a first-line regimen for the treatment of uncomplicated malaria [1]. In Uganda, artemether-lumefantrine (AL) was chosen to replace the combination of CQ + SP as the first-line regimen for malaria in 2004, with amodiaquine + artesunate (AQ + AS) offered as an alternative regimen. 
Although data on AL were limited at that time, subsequent studies from Uganda [2,3] and elsewhere in Africa [4–6] showed that AL was highly efficacious and well tolerated for uncomplicated malaria. Indeed, in the two published studies of AL from Uganda the risk of treatment failure due to recrudescence was reported to be less than 2% [2,3]. However, despite the promise of AL, there are substantial limitations to this regimen, including twice-daily dosing and need for administration with fatty food. A concern with all antimalarial regimens, particularly in areas of high transmission, is frequent recurrence of malaria after therapy. This concern was highlighted in a recent study comparing efficacies of the ACTs AL and AQ + AS in Tororo, Uganda, an area with very high malaria transmission [3]. In that study, both regimens were highly efficacious for eradication of infections but risks of reinfection were extremely high, with 50% of AL-treated and 66% of AQ + AS–treated patients developing recurrent parasitemia within 28 d. Dihydroartemisinin-piperaquine (DP) is a fixed-dose ACT that has recently become available in Africa. In studies from Southeast Asia, DP appeared to be well tolerated and highly efficacious against multidrug-resistant falciparum malaria [7–11]. However, the epidemiology of malaria and patterns of antimalarial drug use are quite different in Africa than in Asia [12,13]. In the only published study evaluating DP in Africa, DP was highly efficacious with a good safety and tolerability profile at three sites in Rwanda [14]. To compare the performance of DP with the new first-line therapy in Uganda, we conducted a randomized clinical trial comparing the efficacy and safety of AL and DP for the treatment of uncomplicated falciparum malaria in Apac, an area of extremely high transmission intensity.\n\nBODY.METHODS.STUDY SITE:\nThe study was conducted at Aduku Health Centre, Apac District, Uganda. The district experiences perennial holoendemic malaria. The entomological inoculation rate in Apac, a measure of transmission intensity, was measured at 1,564 infectious bites per person per year [15]. The study protocol was approved by the Makerere University Research and Ethics Committee, the Uganda National Council of Science and Technology, and the University of California San Francisco Committee for Human Research.\n\nBODY.METHODS.PARTICIPANTS:\nConsecutive patients presenting to the health center with symptoms suggestive of malaria and a positive screening thick blood smear were referred to study physicians for further assessment. Patients were enrolled if they fulfilled the following selection criteria: (1) age 6 mo to 10 y; (2) weight ≥ 5 kg; (3) history of fever in the last 24 h or axillary temperature ≥ 37.5 °C; (4) no history of serious side effects to study medications; (5) no evidence of a concomitant febrile illness; (6) provision of informed consent by a parent or guardian; (7) no danger signs or evidence of severe malaria; and (8) Plasmodium falciparum monoinfection with parasite density 2,000–200,000/μl of blood. Because laboratory results were generally not available until the following day, a patient could be excluded after randomization.\n\nBODY.METHODS.PROCEDURES:\nAt enrollment, we asked children's parents or guardians about prior antimalarial therapy, use of other medications, and presence of common symptoms. Axillary temperature and weight were measured, and a physical examination was performed. 
A brief neurological assessment, consisting of simple clinical tests for fine finger dexterity (ability to pick up a small object), was undertaken. We also obtained blood by fingerprick for thick and thin blood smears, for hemoglobin assessment, and to store on filter paper for molecular analysis. Patients were asked to return for follow-up on days 1, 2, 3, 7, 14, 21, 28, 35, and 42, and any other day that they felt ill. Follow-up evaluation consisted of a standardized history and physical examination, including neurological assessment on all days of follow-up. We obtained blood by fingerprick for thick blood smears and storage on filter paper on all follow-up days except day 1. Hemoglobin measurement was repeated on day 42 or the day of recurrent symptomatic malaria. If patients did not return for follow-up, they were visited at home. Blood smears were stained with 2% Giemsa for 30 min. Parasite densities were determined from thick blood smears by counting the number of asexual parasites per 200 white blood cells (WBCs), or per 500 if the count was less than 10 parasites/200 WBCs, assuming a WBC count of 8,000/μl. A smear was considered negative if no parasites were seen after review of 100 high-power fields. We also assessed gametocytemia from thick blood smears. Thin blood smears were reviewed for non-falciparum infections. A second microscopist, who was unaware of the results of the first reading, re-read all slides. A third microscopist unaware of the first two readings resolved discrepant slides. Hemoglobin measurements were made using a portable spectrophotometer (HemoCue, http://www.hemocue.com).\n\nBODY.METHODS.INTERVENTIONS:\nOn day 0, patients were randomly assigned to receive AL or DP. A nurse administered study medications according to weight-based guidelines for administration of fractions of tablets. We administered all drugs orally as follows: AL (Coartem, Novartis, 20 mg artemether/120 mg lumefantrine tablets), administered according to weight as one (5–14 kg), two (15–24 kg), three (25–34 kg), or four (≥ 35 kg) tablets given twice daily for 3 d; DP (Duocotexin, Holley Pharm, 40 mg dihydroartemisinin/320 mg piperaquine tablets), targeting a total dose of 6.4 and 51.2 mg/kg of dihydroartemisinin and piperaquine, respectively, given in three equally divided daily doses to the nearest quarter tablet. We used a pill cutter to ensure that the tablet fractions were as close to the nearest quarter tablet as possible. Participants in the DP group also received placebo tablets administered in the evening over 3 d to simulate the AL dosing schedule. Study medications were administered with water, and patients were given a glass of milk after each dose of study medication. All treatment was directly observed. Participants were given the option either to wait at the clinic for the evening dose (lunch was provided) or to leave the clinic and return in the evening (transport was provided). After each dose, children were observed for 30 min, and the dose was readministered if vomiting occurred. All patients were provided with a 3 d supply of acetaminophen for treatment of febrile symptoms. Children with hemoglobin of less than 10 g/dl were treated according to Integrated Management of Childhood Illness guidelines with ferrous sulfate for 14 d and antihelminthic treatment if appropriate. 
Households of all patients were given two long-lasting insecticide-treated bed nets (ITNs) (PermaNet, Vestergaard Frandsen, http://www.vestergaard-frandsen.com) on the day of enrollment, with instructions for one net to be used by the study patient.\n\nBODY.METHODS.OBJECTIVES:\nThe objectives of the study were to compare the efficacy and safety of AL and DP for the treatment of uncomplicated falciparum malaria at a high transmission–intensity site in Uganda.\n\nBODY.METHODS.OUTCOMES: EFFICACY:\nTreatment outcomes were classified according to 2006 World Health Organization (WHO) guidelines as early treatment failure (ETF; danger signs or complicated malaria or failure to adequately respond to therapy days 0–3); late clinical failure (LCF; danger signs or complicated malaria or fever and parasitemia on days 4–42 without previously meeting criteria for ETF or LPF); late parasitological failure (LPF; asymptomatic parasitemia days 7–42 without previously meeting criteria for ETF or LCF); or adequate clinical and parasitological response (absence of parasitemia on day 42 without previously meeting criteria for ETF, LCF, or LPF) [16]. Patients were treated with quinine sulfate (10 mg/kg three times daily for 7 d) on the day that they fulfilled criteria for early treatment failure or late clinical failure. Patients with late parasitological failure were followed, and were given quinine only if they developed fever with parasitemia or remained parasitemic on the last day of follow-up. Patients were excluded from further follow-up after enrollment if any of the following occurred: (1) use of antimalarial drugs outside of the study protocol; (2) withdrawal of consent; (3) loss to follow-up; (4) protocol violation; or (5) death due to a nonmalarial illness. The primary efficacy outcomes were the 28- and 42-d risks of early treatment failure or recurrent parasitemia (LCF or LPF), unadjusted and adjusted by genotyping. Secondary efficacy outcomes included prevalence of fever and parasitemia during the first 3 d of follow-up, change in mean hemoglobin from day 0 to day 42 or day of repeat therapy, and prevalence of gametocytemia (presence of gametocytes on thick smears) during follow-up in participants lacking gametocytes at enrollment. Molecular genotyping techniques were used to distinguish recrudescent from new infections for all patients with LCF or LPF response. Briefly, parasite DNA was isolated from filter paper blood samples collected at enrollment and on the day of recurrent parasitemia using chelex extraction. Paired samples were genotyped in a stepwise fashion using merozoite surface protein (msp)-2, msp-1, and four microsatellites [17]. If, for any of the six loci, an allele was not shared between day 0 and day of recurrence, the infection was classified as a new infection. If at least one allele was shared between day 0 and day of recurrence at all six loci, the infection was classified as a possible recrudescence. The term \"possible recrudescence\" was used because the complexity of infection (number of infecting parasite strains) was very high in our setting, making it difficult to distinguish definitively a true recrudescence from a new infection.\n\nBODY.METHODS.OUTCOMES: SAFETY:\nSecondary safety outcomes included risks of serious adverse events and common adverse events of any severity. An adverse event was defined as any untoward medical occurrence, irrespective of its suspected relationship to the study medications [18]. 
At each follow-up visit, patients were assessed for any new or worsening event. All events were graded by severity (none, mild, moderate, severe, life-threatening) and relationship to study treatment (none, unlikely, possible, probable, or definite) using guidelines from the World Health Organization (Toxicity Grading Scale for Determining the Severity of Adverse Events) [19]and the United States National Institutes of Health, Division of Microbiology and Infectious Diseases (Pediatric Toxicity Tables, May 2001) [20]. A serious adverse event was defined as any adverse experience that resulted in death, life-threatening experience, inpatient hospitalization, persistent or significant disability or incapacity, or specific medical or surgical intervention to prevent serious outcome.\n\nBODY.METHODS.SAMPLE SIZE:\nWe calculated sample size to test the hypothesis that the risk of recurrent parasitemia after 42 d would differ between the two treatment groups. Based on previous data, the risk of recurrent parasitemia (unadjusted by genotyping) after 42 d was estimated to be 50% after treatment with AL [3]. Using this estimate, we calculated that 200 patients (allowing for 10% loss to follow-up) would need to be enrolled in each treatment arm to detect a 15% risk difference between the treatment groups with a two-sided type I error of 0.05 and power of 80%.\n\nBODY.METHODS.RANDOMIZATION: SEQUENCE GENERATION, ALLOCATION CONCEALMENT, IMPLEMENTATION:\nA randomization list was computer generated by an off-site investigator without the use of blocking or stratification. Sequentially numbered, sealed envelopes containing the treatment group assignments were prepared from the randomization list. The study doctors assigned treatment numbers sequentially and the study nurse allocated treatment by opening the envelope corresponding to the treatment number. The randomization codes were secured in a locked cabinet accessible only by the study nurse. Participants were enrolled by the study physicians, and treatments were assigned and administered by the study nurse.\n\nBODY.METHODS.BLINDING:\nOnly the study nurse was aware of treatment assignments. All other study personnel, including the study physicians and laboratory personnel involved in assessing outcomes, were blinded to the treatment assignments. Patients were not informed of their treatment regimen, but the color of the two study medications was not the same (DP and placebo tablets were light blue; AL tablets were light yellow).\n\nBODY.METHODS.STATISTICAL METHODS:\nData were entered and verified using Epi Info version 6.04 and analyzed using STATA version 8.0 (STATA, http://www.stata.com). Efficacy and safety data were evaluated using a modified intention-to-treat analysis which included all patients who fulfilled enrollment criteria. Patients who were randomized to therapy but not enrolled in the study due to laboratory results available on day 1 were not included in the analysis. Risks of recurrent parasitemia at 28 and 42 d of follow-up (adjusted and unadjusted by genotyping) were estimated using the Kaplan-Meier product limit formula. Data were censored for patients who did not complete follow-up and for new infections when estimating outcomes adjusted by genotyping. Patients with LCF or LPF due to non-falciparum species were censored as non-failures at the time they were classified as LCF or LPF. The Z-test was used to compare the Kaplan-Meier estimates of treatment efficacy at fixed points in time between the treatment groups. 
Confidence intervals around the difference between Kaplan-Meier estimates were calculated using normal approximation and Greenwood's estimates of standard errors. Categorical variables were compared using Chi-squared or Fisher exact test and continuous variables were compared using the independent samples t-test. All reported p-values are two sided without adjustment for multiple testing and were considered statistically significant if below 0.05.\n\nBODY.RESULTS.PARTICIPANT FLOW:\nOf 572 patients screened, 509 were randomized to treatment, and 421 were enrolled in the study (Figure 1). Primary efficacy outcomes, unadjusted and adjusted by genotyping, were available for 417 (99%) and 416 (99%) enrolled participants, respectively. One patient with an unsuccessful genotyping result in the AL group was not included in the analysis when adjusting for genotyping. Figure 1Trial Profile\n\nBODY.RESULTS.RECRUITMENT:\nThe study was conducted between March and July 2006.\n\nBODY.RESULTS.BASELINE DATA:\nAmong patients enrolled in the study, there was no difference at baseline of gender, age, temperature, parasite density, hemoglobin, or recent antimalarial use between the two treatment groups (Table 1). Among patients treated with AL, the mean total doses (standard deviation [SD]) were 13.1 (2.8) mg/kg for artemether and 78.8 (17.1) mg/kg for lumefantrine. Among patients treated with DP, the mean total doses (SD) were 7.3 (1.0) mg/kg for dihydroartemisinin and 58.4 (8.0) mg/kg for piperaquine. Table 1 Baseline Characteristics of Participants According to Treatment Group \n\nBODY.RESULTS.OUTCOMES AND ESTIMATION.PRIMARY EFFICACY OUTCOMES.:\nThere were no early treatment failures in the first 3 d following initiation of therapy. Episodes of recurrent parasitemia were first detected 14 d following therapy in the AL arm and 21 d following therapy in the DP arm (Figure 2; Table 2). The risk of recurrent falciparum parasitemia unadjusted by genotyping was significantly lower for participants treated with DP than for those treated with AL after 28 d of follow-up (11% versus 29%; risk difference [RD] = 18%, 95% confidence interval [CI] 11%–26%) and after 42 d of follow up (43% versus 53%; RD = 9.6%, 95% CI 0%–19%) (Table 3). Similar trends were seen when results were adjusted by genotyping. The risk of recurrent parasitemia due to possible recrudescence was significantly lower for participants treated with DP than for those treated with AL after 28 d of follow-up (1.9% versus 8.9%; RD = 7.0%, 95% CI 2.5%–12%) and after 42 d of follow up (6.9% versus 16%; RD = 9.5%, 95% CI 2.8%–16%) (Table 3). Among patients with recurrent parasitemia, the median time to recurrent parasitemia was significantly shorter in patients treated with AL compared to patients treated with DP (28 d versus 35 d, p < 0.0001). Additionally, patients treated with AL had a higher risk of recurrent parasitemia due to non-falciparum species (all were either P. malariae or P. ovale) compared to patients treated with DP (5.2% versus 0.9%, p = 0.01). Results were similar when restricting the analyses to children under the age of 5 y, as 94% of patients enrolled were in this age range. Figure 2Cumulative Risk of Recurrent Parasitemia Stratified by Treatment Group Table 2 WHO Treatment Outcomes after 42 Days of Follow-Up Table 3 Estimates of Comparative Efficacy Patients with asymptomatic recurrent parasitemia (LPF) were not treated unless they developed symptomatic malaria or reached the end of the 42-d follow-up period. 
Among 121 patients with LPF occurring before day 42, only six (5%) spontaneously cleared their parasites without treatment, 86 (53/75, 71% in AL group and 33/46, 72% in DP group) went on to develop symptomatic malaria by day 42, 28 (23%) had persistent asymptomatic parasitemia at day 42, and one (1%) took other antimalarials prior to day 42. Overall, 81 (39%) of 210 patients treated with AL went on to develop recurrent symptomatic malaria, compared to 52 (25%) of 211 patients treated with DP (p = 0.002).\n\nBODY.RESULTS.OUTCOMES AND ESTIMATION.SECONDARY EFFICACY OUTCOMES.:\nThe prevalence of fever (either subjective or documented) was similar over the first 3 d of follow-up in the two treatment groups. Both treatments produced rapid clearance of parasitemia with no parasites detected by day 3 (Table 4). The appearance of gametocytes not present at enrollment was significantly lower over the last 4 wk of follow-up in DP group (Table 4). Patients treated with DP had a higher mean increase in hemoglobin levels, which was of borderline statistical significance (1.9 versus 1.5 g/dl, p = 0.05). However, among patients with recurrent parasitemia there was no difference in the prevalence of anemia (Hb <10 g/dl) on the day of failure in the AL group (33/117, 28%) compared to the DP group (25/92, 27%) (p = 0.87). Table 4 Secondary Outcomes \n\nBODY.RESULTS.ADVERSE EVENTS:\nBoth drugs were well tolerated. Most adverse events were of mild or moderate severity and consistent with symptoms due to malaria. Overall, there was no difference in the proportion of study participants who experienced any adverse event of moderate or greater severity between the DP (46%) and AL (42%) treatment groups (p = 0.47). There was also no difference in the proportion of patients who experienced common adverse events of any severity (Table 4). Serious adverse events occurred in six participants and included three febrile convulsions, one case of acute otitis media, one acute asthma attack, and one case of pyomyositis. All serious adverse events were judged to be unrelated to study medications. No patients were withdrawn from the trial for drug-induced vomiting that would have required alternative treatment.\n\nBODY.DISCUSSION.INTERPRETATION:\nIn this randomized clinical trial, AL and DP were both highly efficacious at initial clearance of parasitemia and well tolerated for treatment of uncomplicated malaria in Apac, Uganda, an area with extremely high malaria transmission intensity. Importantly, DP-treated patients had a significantly lower risk of recurrent parasitemia in both falciparum and non-falciparum infections. Accurate distinction between recrudescent and new falciparum infections following therapy was challenging due to the complexity of infection in this high transmission setting, despite the use of 6-locus genotyping. However, DP clearly offered better post-treatment prophylactic effect following therapy compared to AL and our data suggest a reduced risk of treatment failure due to recrudescent parasites. DP also offered other benefits, including a lower risk of gametocytemia after therapy and better hemoglobin recovery. The significantly lower risk of recurrent parasitemia after treatment with DP compared to AL is likely explained by differences in the terminal elimination half-lives of the two partner drugs. Piperaquine, a bisquinoline, is estimated to have an elimination half-life of 2–3 wk [21] compared to lumefantrine, a quinoline, which has an estimated elimination half-life of 4–10 d [22]. 
Selecting the ideal partner drug to combine with artemisinins in ACT regimens remains a challenge. Extended elimination half-lives may provide better post-treatment prophylaxis, but may also increase the risk for the selection of drug-resistant parasites, especially in areas of intense malaria transmission [23]. Resistance may develop to partner drugs during the elimination phase of the drug (due to their longer half-life), when newly infecting parasites are exposed to subtherapeutic levels of the drug. As ACT use becomes widespread in areas with high levels of malaria transmission, it will be important to monitor closely for the selection of parasites that are resistant to artemisinin partner drugs, since the benefits of a regimen that offers decreased recurrent infection must be balanced with the consequences of increased selection of resistant parasites.\n\nBODY.DISCUSSION.GENERALIZABILITY:\nOur study was conducted in an area of very high malaria transmission and highlights the importance of level of transmission in determining the overall efficacy of an antimalarial treatment regimen [24]. In a high-transmission setting, differences in the post-treatment prophylactic effect of ACTs may have a significant impact on the timing and frequency of recurrent episodes of malaria, as seen in this study. In areas with lower transmission intensity, a drug's post-treatment prophylactic effect would be expected to be of lesser importance. This is illustrated in a study from three sites in Rwanda with differing transmission intensity [14]. At a periurban site with relatively low transmission intensity, the risk of recurrent parasitemia after 28 d was equally low in patients treated with DP (4%) and patients treated with AQ + AS (7%). In contrast, at two rural sites with high transmission intensity, the risk of recurrent parasitemia was significantly lower with DP (12%) compared to AQ + AS (23%). One limitation of this study was the difficulty of accurately distinguishing recrudescence from new infections among patients with recurrent parasitemia, due to the high complexity of infection. Among episodes classified as \"possible recrudescence,\" the mean complexity of infection was over five clones on day 0 and approximately four clones on the day of recurrent parasitemia. Even using six-locus genotyping, the probability of a new infection being misclassified as a recrudescence may be relatively high. Thus, the reported 42-d risks of recurrent parasitemia adjusted by genotyping of 16% in the AL treatment arm and 6.9% in the DP treatment arm likely overestimate the true risks of recrudescence. In this study we followed the new WHO outcome classification system where a patient is classified as a failure after the first reappearance of parasitemia, regardless of whether the patient is symptomatic [16]. Previously, patients with asymptomatic parasitemia following therapy were distinguished from those with recurrent symptomatic malaria based on the assumption that asymptomatic parasitemia may not be clinically important in areas of high transmission intensity [25]. 
However, in this study, where semi-immunity is expected to be high, the vast majority of patients with asymptomatic parasitemia following therapy went on to develop symptomatic malaria, strongly supporting the new WHO protocol recommendations and suggesting the weakness of natural immunity in clearing these parasites.\n\nBODY.DISCUSSION.OVERALL EVIDENCE:\nIn this study, DP was shown to offer benefits over AL, including lower risks of recurrent parasitemia and gametocytemia following therapy and improved hemoglobin recovery. Both DP and AL are fixed-dose coformulated ACTs. However, DP has a simpler, once-daily dosing schedule compared to AL, which is provided twice daily, ideally with a fatty meal. Our results could have important policy implications. Currently AL is the recommended first-line therapy for treatment of uncomplicated malaria in Uganda, with AQ + AS recommended as an alternative if AL is not available. DP is now registered for use in Uganda, and appears to offer an additional highly efficacious ACT for our limited antimalarial armamentarium. In studies from Southeast Asia, DP appears to be well tolerated and highly efficacious against multidrug-resistant falciparum malaria [7–11]. In the only published study evaluating DP in Africa, DP had a cure rate of over 95% and had a significantly lower risk of recurrent parasitemia and adverse events compared to AQ + AS and AQ + SP in Rwanda [14]. Based on available data, DP warrants serious consideration as a first-line therapy for uncomplicated malaria in Africa. DP may also have a role for presumptive treatment of fever through the program for home-based management of fever (HBMF). Currently prepackaged CQ + SP (Homapak) is being distributed in the HBMF program. However, the CQ + SP combination is no longer efficacious [24], and there are plans to incorporate ACTs in the future. Use of DP for HBMF might be attractive because of its simple dosing schedule. There are also potential advantages of having more than one first-line therapy, one for facility-based management of malaria and another for HBMF, in reducing the selective pressure of using one ACT. Despite the excellent initial parasite clearance of both ACT regimens in this study and the provision of ITNs at enrollment, approximately half of all participants experienced recurrent parasitemia within 42 d. This finding emphasizes the need for more aggressive approaches to malaria control in areas with very high malaria transmission. A study done in an area of South Africa with lower-intensity transmission found that the combination of vector control measures including indoor residual spraying and provision of AL dramatically decreased the malaria burden [26]. In order to reduce new malaria infections in our study population, we anticipate that combining several malaria control measures, including treatment with ACTs, provision of ITNs (with education about their use), and potential use of indoor residual spraying, as in South Africa, will likely decrease the malaria burden and reduce drug pressure due to repeated use of ACTs. Monitoring of the impact of these combined control measures will be critical to assess our success in malaria control in Uganda.\n\nBODY.SUPPORTING INFORMATION:\nCONSORT Checklist: AL versus DP Trial, Apac Uganda (49 KB DOC) 
Trial Protocol: Comparison of AL and DP for Treatment of Uncomplicated Malaria in Uganda: Evaluation of Efficacy, Safety, and Tolerability at Three Sites with Varying Transmission Intensity (1.1 MB DOC)\n\n**Question:** Compared to artemether-lumefantrine what was the result of dihydroartemisinin-piperaquine on Adverse Events?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** no significant difference\n"
}
] |
no significant difference
|
238
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: A prospective, contralateral comparison of photorefractive keratectomy (PRK) versus thin-flap LASIK: assessment of visual function\n\n ABSTRACT.PURPOSE:\nTo compare differences in visual acuity, contrast sensitivity, complications, and higher-order ocular aberrations (HOAs) in eyes with stable myopia undergoing either photo-refractive keratectomy (PRK) or thin-flap laser in situ keratomileusis (LASIK) (intended flap thickness of 90 μm) using the VISX Star S4 CustomVue excimer laser and the IntraLase FS60 femtosecond laser at 1, 3, and 6 months postoperatively.\n\nABSTRACT.METHODS:\nIn this prospective, masked, and randomized pilot study, refractive surgery was performed contralaterally on 52 eyes: 26 with PRK and 26 with thin-flap LASIK. Primary outcome measures were uncorrected distance visual acuity (UDVA), corrected distance visual acuity (CDVA), contrast sensitivity, and complications.\n\nABSTRACT.RESULTS:\nAt 6 months, mean values for UDVA (logMAR) were −0.043 ± 0.668 and −0.061 ± 0.099 in the PRK and thin-flap LASIK groups, respectively (n = 25, P = 0.466). UDVA of 20/20 or better was achieved in 96% of eyes undergoing PRK and 92% of eyes undergoing thin-flap LASIK, whereas 20/15 vision or better was achieved in 73% of eyes undergoing PRK and 72% of eyes undergoing thin-flap LASIK (P > 0.600). Significant differences were not found between treatment groups in contrast sensitivity (P ≥ 0.156) or CDVA (P = 0.800) at postoperative 6 months. Types of complications differed between groups, notably 35% of eyes in the thin-flap LASIK group experiencing complications, including microstriae and 2 flap tears.\n\nABSTRACT.CONCLUSION:\nUnder well-controlled surgical conditions, PRK and thin-flap LASIK refractive surgeries achieve similar results in visual acuity, contrast sensitivity, and induction of HOAs, with differences in experienced complications.\n\nBODY.INTRODUCTION:\nRefractive surgery is one of the most commonly performed elective procedures and will likely maintain its popularity as ablation techniques become more refined and understanding of corneal wound healing improves. Two of the most common methods of refractive surgery are photorefractive keratectomy (PRK) and laser in situ keratomileusis (LASIK). 
The rapid improvement in vision and lack of postoperative pain associated with LASIK has made this the preferred option with patients compared with PRK, which has greater postoperative discomfort and prolonged recovery of visual acuity.1 Recently, there has been renewed interest in PRK because of increasing concerns of complications associated with LASIK flap creation, including dry eye, corneal ectasia, and flap tears.2–5 Thin-flap LASIK attempts to gain benefits of both techniques by creating a flap of between 80 and 90 μm.6–8 Use of a thinner flap results in a more biomechanically stable cornea and decreases incidence of ectasia given the thicker residual stroma.3,9 Cutting a thinner LASIK flap is less invasive to the nerves within the corneal stroma, decreasing the severity and duration of dry eye, possibly by preserving corneal sensation and blinking rate.10–14 Flap creation avoids corneal epithelium removal, allowing reduced healing time and less haze and scarring.15 The present contralateral study compares the outcomes of eyes that have undergone PRK or thin-flap LASIK using the VISX STAR S4 excimer laser (VISX Incorporated, Santa Clara, CA), with flaps created with intended thicknesses of 90 μm using the IntraLase FS60 femtosecond laser (Abbott Medical Optics [AMO], Santa Ana, CA).\n\nBODY.METHODS:\nData from myopic eyes were analyzed, with or without astigmatism, in which the dominant eye was randomized (Research Randomizer software – Urbaniak, www.randomizer.org) to PRK or thin-flap LASIK (90 μm flap) and the nondominant eye underwent the alternative treatment. All PRK and thin-flap LASIK treatments were performed using the VISX Star S4 CustomVue laser at the John A. Moran Eye Center, Salt Lake City, Utah, between February 2008 and July 2009. All surgeries were overseen by two surgeons (M.M., M.D.M.). The research protocol was approved by the University of Utah Hospital Institutional Review Board. All patients included in this study met the US Food and Drug Administration guidelines for VISX CustomVue LASIK. Mean age of patient, 13 men and 13 women, was 30.8 years (range: 23–46). Twenty-six patients (52 eyes) with stable myopia (1.5–8.5 diopters [D]) and astigmatism (0.242–3.11 D) were enrolled in the study. Eleven patients excluded from this study had clinically significant lens opacities, previous corneal or intraocular surgery, keratoconus, unstable refraction, autoimmune disease, immunosuppressive therapy, or were pregnant or breastfeeding. Correction was made for distance and patients desiring monovision correction were excluded. Contact lenses were discontinued 2 weeks prior to screening for soft contact lens wearers and 6 weeks prior to screening for rigid gas permeable lens wearers. All patients had a preoperative examination including assessment of uncorrected distance visual acuity (UDVA), corrected distance visual acuity (CDVA), tonometry, slitlamp examination of the anterior segment, and dilated fundus examination. Manifest and cycloplegic refractions were repeated on 2 separate visits to ensure reliability and stability. Corneal topography and thickness were measured using the Orbscan II v.3.0 (Bausch and Lomb, Rochester, NY). All eyes received 5 preoperative wavefront analyses with the VISX CustomVue WaveScan aberrometer v.3.62 (Fourier) (AMO), without pharmacologic intervention, under mesopic conditions, with a minimum pupil diameter of 6.0 mm. 
The contralateral study design was made so that each eye could act as a control for the fellow eye in each patient, allowing for study groups to be well matched. There were no violations in the randomization; all patients were analyzed as originally assigned. The randomization protocol was generated before the trial and known only to the study coordinator. In all patients, the emmetropic correction target was based on manifest refraction and wavefront analysis. All flaps were created with the IntraLase FS60 femtosecond laser at 60 kHz in a raster pattern with bed energy of 1.15 μJ, side-cut energy of 2.00 μJ, and pocket enabled. The flaps were created with an intended thickness of 90 μm, diameter of 8.4 to 9.0 mm, superior hinge angle of 55°, and a side-cut angle of 70°. Intraoperative pachymetry or optical coherence tomography were not performed to evaluate actual flap thicknesses. If the 8.0 mm maximum intended ablation diameter exceeded the flap diameter, the hinge and flap were shielded during ablation. Postoperatively, each eye undergoing thin-flap LASIK received 1 drop of gatifloxacin 0.3% (Zymar; Allergan Inc, Irvine, CA), prednisolone acetate 1% (Pred Forte, Allergan Inc), ketorolac tromethamine 0.4% (Acular LS, Allergan Inc.), and a bandage soft contact lens (Softlens Plano T, Bausch and Lomb, Rochester, NY). The prednisolone acetate was continued hourly during the first preoperative day and 4 times daily for an additional 6 days. The gatifloxacin was continued 4 times daily for 1 week. In eyes undergoing PRK all eyes had their corneas cooled with 15 mL of BSS (2.8–3.9°C) immediately following ablation. This was followed by 1 drop of a gatifloxacin 0.3% (Zymar), prednisolone acetate 1% (Pred Forte), ketorolac tromethamine 0.4% (Acular LS) and a bandage soft contact lens (Softlens Plano T). Ketorolac tromethamine was continued 4 times a day for 3 days and then discontinued. Gatifloxacin and prednisolone acetate were continued 4 times a day for 1 week with a subsequent steroid taper over 2 to 3 months per surgeon preference. Mitomycin C was not administered to any patient in the study at any time. Both bandage soft contact lenses were removed simultaneously once re-epithelialization was complete, typically on postoperative days 3 to 5. Patients were seen 1 day, 1 week, 1 month ± 10 days, 3 months ±14 days, and 6 months ±14 days. At all follow-up examinations, UDVA and CDVA were tested using a standard Snellen eye chart. Visual acuity was recorded in both Snellen notation and logarithm of the minimum angle of resolution (logMAR) format. Contrast sensitivity was measured in controlled mesopic conditions at 3, 6, 12, and 18 cycles per degree (cpd) using the Vectorvision CSV-1000E chart (Vectorvision, Greenville, OH). Higher-order aberrations (HOAs), including coma Z(3,1), trefoil Z(3,3), and spherical aberration Z(4,0), were measured using the CustomVue WaveScan at a mean diameter of 6 mm. Undilated scans of both eyes were taken preoperatively and 1, 3, and 6 months postoperatively. Primary outcome measures were UDVA, CDVA, contrast sensitivity, and complications. HOAs were measured and trended within groups as secondary measures. After the study was completed, the results were compiled and the data unmasked for statistical analysis. Refractive error, visual acuity, and HOAs were treated as continuous variables and analyzed for significance by independent t-tests. In all tests, P values <0.05 were considered statistically significant. 
Data analysis was done using Microsoft Excel (Microsoft Corp, Redmond, WA).\n\nBODY.RESULTS:\nMean preoperative measurements of UDVA, CDVA, sphere, and cylinder are shown in Table 1. 25 of 26 patients (50 eyes) completed the study at postoperative 6 months. One eye in the thin-flap LASIK group required PRK retreatment following a flap tear and both eyes from this patient were therefore removed from analysis of visual acuity, contrast sensitivity, and HOAs as the retreatment prevented the ability to distinguish results between the 2 surgical methods. The eyes from this patient were still included in the analysis of complications.\n\nBODY.RESULTS.VISUAL ACUITY:\nTable 2 shows visual acuity outcomes at 1, 3, and 6 months postoperatively. Statistically significant differences were found between PRK and thin-flap LASIK in UDVA at 1 month postoperatively, with thin-flap LASIK eyes showing more improvement in UDVA. Visual acuities were not statistically different between the groups at 3 or 6 months.\n\nBODY.RESULTS.STABILITY, EFFICACY, AND PREDICTABILITY:\nTable 3 shows stability, efficacy, and predictability outcomes postoperatively at 1, 3, and 6 months. CDVA was statistically different between groups at 1 month, with 24% of the PRK group losing a line or more from preoperative values, while 9% of eyes in the thin-flap LASIK group lost only 1 line at 1 month. No eyes in the thin-flap LASIK group lost more than 1 line. Also, 39% of eyes in the thin-flap group gained a line by 1 month compared with only 12% of eyes in the PRK group. At 6 months 64% and 56% of eyes had gained a line or more of CDVA in the PRK and thin-flap LASIK groups, respectively (P = 0.462).\n\nBODY.RESULTS.CONTRAST SENSITIVITY:\nContrast sensitivity measurements at 3, 6, 12, and 18 cycles per degree (cpd) in each group are shown in Figure 1. There were no differences between groups at any cpd at any time in the study (P ≥ 0.156). The thin-flap LASIK group showed no change in contrast sensitivity postoperatively (P > 0.131), while patients in the PRK group had a slight decrease in contrast sensitivity at 1 month seen at 3 and 12 cpd (P = 0.004) and (P = 0.025), respectively. At 6 months contrast sensitivity in the PRK group was still significantly decreased from baseline at 3 cpd (P = 0.013), although it did not reach a statistically significant difference at 3 months (P = 0.101).\n\nBODY.RESULTS.COMPLICATIONS:\nTypes of complications differed between the 2 groups. In the PRK group, 2 cases of epithelial defects occurred by 1 week, but had completely resolved by 6 months. Three eyes in the PRK group had mild haze appearing as early as 1 week postoperatively. Haze remained in only 1 eye at 6 months, but was classified as minimal and had no effect on UDVA or CDVA. Nine eyes (35%) in the thin-flap LASIK group experienced complications. In the thin-flap LASIK group, flap debris (1 eye), diffuse lamellar keratitis (DLK, 1 eye), and an epithelial cyst at the edge of 1 flap were observed, with no loss of UDVA or CDVA, and all resolved by 6 months. Microstriae were observed in 6 eyes, one of which was the eye described above with flap debris and the other was the eye with DLK, with no associated loss of UDVA or CDVA, with epithelial proliferation noted as filling the microstria and making them less apparent. 
Two eyes in the thin-flap LASIK group experienced flap tears intraoperatively – one resulting in mild flap edge scarring by 6 months that had no significant effect on visual function, and the other case affecting vision at 1 month postoperatively which was retreated with PRK at 3 months. As a result of the retreatment with the counter surgical technique, the ability to accurately compare visual acuity, contrast sensitivity, and HOAs between the 2 surgical methods was limited and both eyes from this patient were removed from analysis of these measures, but were still included in the analysis of complications.\n\nBODY.RESULTS.HIGHER-ORDER ABERRATIONS:\nAt postoperative 1, 3, and 6 months, 24 (96%), 25 (100%), and 24 (96%) eyes, respectively, in each group completed CustomVue WaveScan analysis. Total root-mean square (RMS) HOAs, coma, trefoil, and spherical aberrations are compared in Figure 2. There were no significant differences between groups in any HOAs throughout the study (P ≥ 0.101), with all P values at 6 months ≥0.63. In both groups, total HOAs (P < 0.008), spherical (P < 0.002), and coma (P = 0.008 at 3 months; P = 0.024 at 6 months) aberrations were significantly increased compared with preoperative conditions. Trefoil showed no significant change throughout the study in either group (P = 0.298).\n\nBODY.DISCUSSION/CONCLUSION:\nThe present study confirms that PRK and thin-flap LASIK are effective surgeries for the correction of low to moderate myopia. Although thin-flap LASIK showed superior visual results in the early postoperative period there was no statistically significant difference in outcomes of UDVA, CDVA, contrast sensitivity, or total RMS HOAs between PRK and thin-flap LASIK by 6 months. In a similar study comparing PRK and thin-flap LASIK, Slade et al also found that UDVA results were better in the thin-flap group early on and equalized by 6 months.16 Our study showed a similar trend, with no significant differences in any of the primary outcomes at 6 months, and with no difference in UDVA at 3 months. Visual regression in our study was similar to outcomes in Slade's study in which 42% of the PRK group lost a line or more of CDVA and 22% of the thin-flap LASIK group lost 1 line at 1 month postoperatively. Despite the use of custom ablation, postoperative increases in total HOAs, sphere, and coma were noted in our study, as also seen by Slade et al, although they noted that the increase in sphere and coma aberrations was significantly higher in the PRK group at 1 and 3 months postoperatively. As found in previous studies, there was no significant change found in trefoil at any time postoperatively.17,18 Our study showed no difference in induction of HOAs between groups at any time. Although increases in HOAs after refractive surgery have been correlated with decreases in contrast sensitivity in other studies, we demonstrate that increases in total RMS, sphere, and coma were seen postoperatively in both groups without a reliable decrease in contrast sensitivity.19,20 Slade's group found that contrast sensitivity was better in the thin-flap group at all postoperative points in the study, which may have been related to their finding of lower induction of sphere and coma aberrations in the thin-flap group compared with the PRK group. The authors recognize that the Slade study had a larger population size (n = 50 per group) and would have increased power to detect significant differences. 
Our study would have had increased power of analysis with a similar study group size, but results from analysis of HOAs would not likely change as P values for all HOAs at 6 months were ≥0.63. It would be difficult to make any such correlation between contrast sensitivity and HOAs from the results of this study. A loss of CDVA has been associated with the development of corneal haze in other studies, but as mentioned above none of the patients with visual regression developed haze.21–23 Findings in other studies showing that the biomechanics of eyes that have received thin-flap LASIK treatment are indistinguishable from those of PRK have led to suggestions that thin-flap LASIK is the best approach to LASIK.16 Although the present study did not find any statistically significant differences between thin-flap LASIK and PRK in terms of visual quality at 6 months, complications dealing with flap integrity in the thin-flap LASIK group were present which are not complications found in PRK. Although PRK remains a viable option for those unable to undergo LASIK, the use of thinner flaps may eliminate some of the complications seen with traditional LASIK. Larger studies are needed to better compare the complication rates of both methods and to determine how effective thin-flap LASIK will be in achieving the benefits of PRK and LASIK while avoiding the risks associated with each method. While thinner LASIK flaps attempt to preserve the biomechanical stability of the corneal stroma, at the same time, the flap itself becomes less stable, as was noted with the 2 flap tears and other complications occurring in the thin-flap LASIK group in this study. A study by Espandar and Meyer24 showed that most complications in flaps created by IntraLase femtosecond laser occurred at the hinge, which is where the 2 flap tears that occurred in this study. A thinner flap hinge would be biomechanically less stable and would increase the likelihood of intraoperative flap tear occurrence as well. Six of the 9 eyes with complications in the thin-flap LASIK group had microstriae, which are caused by the flattening of a weak corneal flap unable to maintain its curvature over the small area of stroma removed during ablation. The biomechanics of the flap and hinge, however, cannot be evaluated by the design of this study as analysis was done based on intended flap thickness, which has been shown to vary with the IntraLase FS60 femtosecond laser.25 The study could have been strengthened had intraoperative pachymetry or OCT been performed. Creating a flap with increased integrity would help prevent microstriae from forming and would also provide for a stronger hinge that would be less susceptible to flap tear. Possible ways to optimize flap integrity include modification of hinge and side-cut angle creation, as well as improved planarity and microdisruption of flap edges. This will allow improved adhesion of the flap to the underlying stroma. Continued improvements in laser technology may allow for safer creation of thinner flaps, helping to provide evidence for superior outcomes in thin-flap LASIK, permitting the biomechanical stability of PRK with the visual recovery of LASIK. 
Custom flap formation that minimizes weak areas susceptible to tearing will be helpful in achieving this difficult balance between corneal and flap integrity.\n\n**Question:** Compared to Pre-surgery baseline what was the result of Photo-refractive keratectomy (PRK) on Contrast sensitivity at 3 months seen at 3 cpd?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** no significant difference\n"
}
] |
no significant difference
|
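Each record in this dump follows a fixed pattern: a numeric row id, then a user message holding the article and question, then an assistant message holding the answer, then a bare label string, with `|` separating the cells. Below is a minimal loading and sanity-check sketch. It assumes the rows were exported as JSON Lines; the field names (`row_id`, `answer_messages`, the file name) are made up for illustration and are not a documented schema.

```python
import json

# The three option strings enumerated at the end of every question in this dump.
ALLOWED = {"significantly increase", "no significant difference", "significantly decrease"}


def normalise(answer: str) -> str:
    """Strip the '**Answer:**' prefix and map past-tense variants onto the option strings."""
    answer = answer.replace("**Answer:**", "").strip().lower()
    if answer.endswith("increased") or answer.endswith("decreased"):
        answer = answer[:-1]  # "significantly decreased" -> "significantly decrease"
    return answer


def load_rows(path: str):
    """Yield one record per JSON line; the field names used below are assumptions."""
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            if line.strip():
                yield json.loads(line)


def answer_is_valid(record: dict) -> bool:
    # Assumed layout: record["answer_messages"] is a list such as
    # [{"role": "assistant", "content": "**Answer:** no significant difference\n"}]
    content = record["answer_messages"][0]["content"]
    return normalise(content) in ALLOWED


# Usage sketch with a hypothetical file name:
# bad_ids = [r["row_id"] for r in load_rows("trial_qa_rows.jsonl") if not answer_is_valid(r)]
```

The normalisation step exists only because some rows in this dump write the answer in the past-tense form ("significantly decreased") while the question text asks for "significantly decrease"; which form is canonical is not settled by the dump itself, so the checker accepts both rather than altering the data.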
265
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Randomized Clinical Trial of Laparoscopic Versus Open Repair of the Perforated Peptic Ulcer: The LAMA Trial\n\n ABSTRACT.BACKGROUND:\nLaparoscopic surgery has become popular during the last decade, mainly because it is associated with fewer postoperative complications than the conventional open approach. It remains unclear, however, if this benefit is observed after laparoscopic correction of perforated peptic ulcer (PPU). The goal of the present study was to evaluate whether laparoscopic closure of a PPU is as safe as conventional open correction.\n\nABSTRACT.METHODS:\nThe study was based on a randomized controlled trial in which nine medical centers from the Netherlands participated. A total of 109 patients with symptoms of PPU and evidence of air under the diaphragm were scheduled to receive a PPU repair. After exclusion of 8 patients during the operation, outcomes were analyzed for laparotomy (n = 49) and for the laparoscopic procedure (n = 52).\n\nABSTRACT.RESULTS:\nOperating time in the laparoscopy group was significantly longer than in the open group (75 min versus 50 min). Differences regarding postoperative dosage of opiates and the visual analog scale (VAS) for pain scoring system were in favor of the laparoscopic procedure. The VAS score on postoperative days 1, 3, and 7 was significant lower (P < 0.05) in the laparoscopic group. Complications were equally distributed. Hospital stay was also comparable: 6.5 days in the laparoscopic group versus 8.0 days in the open group (P = 0.235).\n\nABSTRACT.CONCLUSIONS:\nLaparoscopic repair of PPU is a safe procedure compared with open repair. The results considering postoperative pain favor the laparoscopic procedure.\n\nBODY.INTRODUCTION:\nThe incidence of perforated peptic ulcer (PPU) has declined over the past several years because of the introduction of anti-ulcer medication and Helicobacter eradication therapy [1, 2]. Nevertheless the incidence and mortality of PPU is 5–10%. The mortality will increase up to 50% if the perforation exists for more than 24 h [3, 4]. There are several options for treatment of PPU, but the preferred treatment is surgery by upper abdominal laparotomy [5, 6]. Mouret et al. published the first results of laparoscopic repair in 1990 [7]. He concluded that it was a good method that probably reduced postoperative wound problems and adhesions. After the success of laparoscopic cholecystectomy and other laparoscopic procedures, it was thought that patients would have less pain and a shorter hospital stay after laparoscopic correction of PPU [8, 9]. Various studies have shown that laparoscopic suturing of the perforation is feasible, but there is still no proof of real benefits of laparoscopic correction [3, 6, 10–12]. 
Therefore we performed a multicenter randomized trial comparing open correction of PPU with laparoscopic repair.\n\nBODY.METHODS.PARTICIPANTS:\nPatients with symptoms of the clinical diagnosis of PPU were included in nine medical centers in the Netherlands participating in the LAMA (LAparoscopische MAagperforatie) trial between March 1999 and July 2005. Eligible patients were informed of the two surgical approaches and were invited to participate in the study. Exclusion criteria were the inability to read the Dutch language patient information booklet, inability to complete informed consent, prior upper abdominal surgery, and current pregnancy. The ethics committees of all participating institutions approved the trial.\n\nBODY.METHODS.RANDOMIZATION:\nSurgeons contacted the study coordinator after the patients had provided informed consent and randomization took place by opening a sealed envelope. The envelope randomization was based on a computer-generated list provided by the trial statistician.\n\nBODY.METHODS.SURGICAL PROCEDURE:\nAll patients received intravenous antibiotics prior to operation and were allocated for Helicobacter pylori eradication therapy according to established guidelines [13]. The open surgical procedure was performed through an upper abdominal midline incision. Closure of PPU was to be achieved by sutures alone or in combination with an omental patch. After repair of the defect cultures were drawn from the peritoneal fluid, after which the peritoneal cavity was lavaged. During lavage it was permissible to insufflate the stomach to test for leakage of the closed defect. No method was specified for closing the abdomen. Laparoscopic repair was performed with the patient and the team set up in the \"French\" position. Trocars were placed at the umbilicus (video scope) and on the left and right midclavicular line above the level of the umbilicus (instruments). If necessary a fourth trocar was placed in the subxiphoid space for lavage or retraction of the liver. Surgeons were free to use either 0° or 30° video scopes for the procedure. The rest of the procedure was identical to that described above for open repair. No method was specified for closing the trocar incisions.\n\nBODY.METHODS.POSTOPERATIVE FOLLOW-UP:\nPostoperative pain was scored by means of a visual analog scale (VAS) for pain on days 1, 3, 7, and 28 ranging from 0 (no pain) to 10 (severe pain). In addition, the days during which opiates were used by the patients were registered. All complications, minor and major, were monitored. The treating surgeons determined time of discharge on the basis of physical well-being, tolerance of a normal diet, and ability to use the stairs. For this reason, this was an unblinded trial. Postoperative hospital stay without correction for time spent in hospital as a result of non-medical reasons (inadequate care at home) was calculated. Patients were invited to attend the outpatient clinic at 4 weeks, 6 months, and one year postoperatively. They were asked to complete forms related to pain and use of analgesics.\n\nBODY.METHODS.STATISTICAL ANALYSIS:\nData analysis was carried out according to the intention-to-treat principle as established in the trial protocol. Data were collected in a database, and statistical analyses were performed with the Statistical Package for Social Sciences for Windows (SPSS 15.0, SPSS Inc., Chicago, IL). A researcher blinded to the nature of the procedures performed all data analyses. The primary outcome of the trial was duration of hospital stay. 
The power analysis was performed on basis of a reduction in hospital stay by 1.5 days (10–8.5 days from admission) in favor of the laparoscopically treated group using a β of 0.80 and an α of 0.05. This resulted in a trial size of 50 patients per group. The Pearson chi-squared test was used to compare categorical variables, and the Mann-Whitney U-test was used to compare continuous variables as we could not assume normal distribution because of the relatively small numbers. In Tables 1–6 medians and interquartile ranges (IQR) are reported. All data were analyzed according to the intention-to-treat principle; i.e., patients remained in their assigned group even if during the procedure the surgeon judged the patient not to be suitable for the technique assigned or if conversion was required. Null hypotheses were tested two-sided and a P value of 0.05 or less was considered statistical significant.Table 1Baseline parametersLaparoscopic repairOpen repairP valuen = 52 n = 49 Male:female ratio1.3:11.9:1Median age (years) + IQR 66 (25.8)59 (29.5)0.185Median BMI (kg/m2) + IQR23 (4)22 (5)0.118Median duration of symptoms (h) + IQR11 (17)11 (19)0.948Median blood pressure systolic (mmHg) + IQR125 (38.5)130 (36.5)0.457Median blood pressure diastolic (mmHg) + IQR75 (25.5)75 (24.5)0.596Median heart rate (beats/min) + IQR 88 (34.0)92 (21)0.403Median body temperature (°C) + IQR 36.9 (0.92)36.8 (1.5)0.658Mannheim Peritonitis Index + IQR19.5 (8.25)16 (14)0.386Median white cell count (×109/l) + IQR 12.1 (8.9)12.1 (7.75)0.467Median ASA score + IQR 1.0 (1.0)1.5 (1.0)0.902IQR interquartile range, difference between 25th percentile and 75th percentile; BMI body mass indexASA American Society of Anesthesiologists Association score\n\nBODY.RESULTS.PATIENTS:\nA total of 109 patients were included in the trial based on a high suspicion of PPU (Fig. 1). Eight patients were excluded during operation because no gastric perforation was detected or a defect in other parts of the digestive tract was uncovered. Data for these patients were not collected and the patients were excluded from further analysis. The remaining 101 patients made up the study population; their baseline parameters are given in Table 1. Fifty-two patients were randomized for laparoscopic repair and 49 for open repair of the perforation. Forty patients were female. The mean age of the patients was 61 years. The BMI (body mass index) was equally distributed between the groups, with a median of 22.5. Patients in both groups had been suffering from symptoms for a mean duration of 11 h, and those in the laparoscopy group presented with a median Mannheim Peritonitis index [14] of 19.5, whereas those in the open group had a median Mannheim Peritonitis index of 16.Fig. 1Patient flow chart Thirty patients reported the use of non-steroidal anti-inflammatory drugs (NSAIDs; 17 laparoscopic versus 13 open), and 10 patients used proton pump inhibitors (6 laparoscopic versus 4 open). Patient history revealed gastric ulcer disease in 19 patients.\n\nBODY.RESULTS.INTRAOPERATIVE FINDINGS:\nThe discovered ulcer perforations were found to have a mean diameter of 10 mm, which did not differ between groups (Table 2). Location of the perforated ulcers was distributed equally between groups. Defects were located in the prepyloric region (n = 41), the postpyloric region (n = 34), and at the pylorus (n = 20). The median volume of lavage fluid used was 1,000 ml (range: 100–5,000 ml). The surgeon decided the amount of lavage used. 
There was no consensus on how much was necessary. Median blood loss did not differ between groups. Skin-to-skin time differed by 25 min, favoring open repair of PPU (Table 2).Table 2Intraoperative findingsLaparoscopic repairOpen repairP valuen = 52n = 49Median size of perforation (mm) + IQR10.0 (7.0)7.0 (6.0)0.379Number of patients with defect Pyloric812 Postpyloric2014 Prepyloric1922 Median volume of lavage (ml) + IQR1,000 (1,500)1,000 (1,425)1.000 Median bloodloss (ml) + IQR10.0 (40.0)10.0 (50.0)0.423 Skin to skin time (min) + IQR75 (47.2)50 (25.5)0.000\n\nBODY.RESULTS.INTRAOPERATIVE COMPLICATIONS:\nConversion to open surgery was required in four patients (8%). Reasons for conversion included the inability to visualize the ulcer defect because of bleeding (n = 1/52; 2%), inability to reach the defect because of perforation in the vicinity of the gastroduodenal ligament and because of a dorsal gastric ulcer (n = 2/52; 4%), and inability to find the perforation (n = 1/52; 2%).\n\nBODY.RESULTS.POSTOPERATIVE COMPLICATIONS:\nComplications were statistically equally distributed between the two groups (Table 3). There were 12 complications in 9 patients in the laparoscopic group and 24 complications in 15 patients in the open group. Mortality was 4% in the laparoscopic group and 8% in the open group. In the laparoscopic group death was caused by sepsis due to leakage at the repair site. In the open group 3 patients died because of pulmonary problems (ARDS, pneumonia), and 1 patient died after complications following a cerebrovascular accident (CVA) combined with respiratory insufficiency.Table 3Postoperative complicationsLaparoscopic repairOpen repairP valuen = 52n = 49Pneumonia21Respiratory insufficiency13ARDS1Cardiac problems22Sepsis31Leakage at repair site2Abscess3Ileus1Fascial dehiscence1Wound infection3Urinary tract infection2Incisional hernia1Cerebrovascular accident1Death24Total complications12240.061Total of patients with complications ≥19 (18%)15 (36%)\n\nBODY.RESULTS.DISCHARGE:\nTime to discharge was similar for the two groups, with a median difference of 1.5 days (Table 4). Nasogastric decompression could be stopped after 2–3 days in both groups (Table 4).Table 4Duration of hospital stay, nasogastric decompressionLaparoscopic repairOpen repairP valuen = 52n = 49Median hospital stay (days) + IQR6.5 (9.3)8.0 (7.3)0.235Median duration of nasogastric decompression (days) + IQR2.0 (3.0)3.0 (1.3)0.334\n\nBODY.RESULTS.PAIN:\nVisual analog pain scores were in favor of laparoscopic repair (Table 5; p < 0.005). Although the median duration of opiate use in the two groups was 1.0, the mean duration in the open group was found to be 0.6 days longer than in the laparoscopic group (Table 6).Table 5Postoperative painMedian VAS pain scoreMedian VAS pain scoreP valueLaparoscopic repairOpen repairDay 1 + IQR3.8 (3.0)5.15 (2.5)0.001Day 3 + IQR2.1 (2.5)3.0 (2.4)0.035Day 7 + IQR1.0 (2.0)1.85 (2.8)0.036Day 28 + IQR0.3 (0.7)0.0 (1.7)0.748Table 6Postoperative opiate usageOpiate requirementOpiate requirementP valueLaparoscopic repairOpen repairMedian duration (days) + IQR1.0 (1.25)1.0 (1.0)0.007Mean duration (days) ± SD1.0 ± 0.91.6 ± 0.90.007\n\nBODY.RESULTS.VAS APPEARANCE OF SCAR:\nThe VAS score for appearance of the scar left by the respective procedures (subjectively recorded in the same way as pain) differed by 2.3 points, favoring the laparoscopic procedure (7.7 vs. 
5.4; P = 0.033)\n\nBODY.DISCUSSION:\nThe need for surgery for PPU has declined enormously in Europe and America with reported rates ranging from 50% to 80%, thanks to effective gastric acid-reducing medication [15]. For this reason, as well as because many surgeons prefer upper laparotomy, it took more time than expected to include 100 patients in our study. Reasons given by surgeons who prefer open repair were that it is a more familiar procedure and it can be completed faster than laparoscopy. It was also noted that patients often undergo operation at night, when the surgeon on call was not always laparoscopically trained. Other randomized trials have already shown the feasibility of laparoscopic repair of PPU [3, 4, 6, 10]. Only a few had more than 100 patients, and some emphasized results from subgroups of patients [8, 11, 12]. We did not subdivide our patients and included patients with risk factors for developing sepsis or conversion [10]. In eight of the original 109 patients (7%) it became evident during the operation that the patient had a diagnosis different from PPU. In the patients who were randomized for laparoscopy this discovery revealed the benefit of laparoscopy as a diagnostic procedure indicating either an upper or lower abdominoplasty or continuation of the laparoscopy [16]. Conversion rate in the laparoscopy group was 8% (4/52). This is much lower than that reported in literature, where conversion rates as high as 60% were found [3, 4, 6]. This maybe partially explained by the fact that only trained and experienced laparoscopic surgeons (those performing at least 50 laparoscopic procedures a year) participated in this trial, confirming the belief that this procedure should only be done by experienced surgeons [3–5]. Operating time was significantly longer in the laparoscopy group (75 min versus 50 min), which is comparable to reports in the literature [3, 10]. A possible explanation for the longer operative time is that laparoscopic suturing is more demanding [9, 17], especially if the edges of the perforation are infiltrated and friable. Sutures easily tear out and it is more difficult to take large bites and to tie knots properly. Use of a single-stitch method described by Siu et al. [18], fibrin glue, or a patch might solve this problem [12, 19]. Another reason for the increase in operating time is the irrigation procedure. Irrigating through a 5-mm or even a 10-mm trocar is time consuming, and suction of fluid decreases the volume of gas and reduces the pneumoperitoneum. There is no evidence that irrigation lowers the risk of sepsis [20], so it might only be necessary if there are food remnants in the abdomen; perhaps there is no need for it at all. One of the suspected risks of laparoscopic surgery is that of inducing sepsis by increasing bacterial translocation while establishing a pneumoperitoneum [6]. This danger could not be confirmed in our trial. Furthermore data suggest that there is a decrease in septic abdominal complications when laparosopic surgery is used [8]. Evidence already exists that laparoscopic correction of PPU causes less postoperative pain [6, 12, 17, 18]. The meta-analysis published by Lau [6] showed that eight out of ten studies showed a significant reduction in dosage of analgesics required in the laparoscopic group. Also, the three studies that had included VAS pain scores showed consistently lower pain scores, as was observed in our study as well. 
Whether this will lead to a better quality of life for patients, especially during the first weeks after surgery still needs to be analyzed. Although patients in our series who underwent laparoscopy had less postoperative pain, there was no difference in the length of hospital stay in our two groups. In fact, hospital stay overall in our patients was very long. This was most likely caused by the fact that many patients, especially the elderly, could not be discharged because of organizational reasons. Of the 101 patients, 41% were 70 years or older (24 in the laparoscopic group versus 17 in the open group). It appears that the age of PPU patients is increasing, and this will eventually represent a significant problem in the future [2, 3]. One benefit of the laparoscopic procedure not often mentioned in literature [6] is cosmetic outcome. Nowadays patients are aware of this benefit, and sometimes this is the reason why they demand laparoscopic surgery. In conclusion, the results of the LAMA trial confirm the results of other trials that laparoscopic correction of PPU is safe, feasible for the experienced laparoscopic surgeon, and causes less postoperative pain. Operating time was longer in the laparoscopic group and there was no difference in length of hospital stay or incidence of postoperative complications.\n\n**Question:** Compared to Conventional open approach what was the result of Laparoscopic surgery on Visual analog scale pain score at day 3?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly decreased\n"
}
] |
significantly decreased
|
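The row above states that continuous outcomes in the LAMA trial were compared with a two-sided Mann-Whitney U-test (P ≤ 0.05 significant) and reports day 3 VAS medians of 2.1 (laparoscopic) versus 3.0 (open), P = 0.035. A minimal sketch of that style of comparison with scipy follows; the per-patient scores are simulated stand-ins centred near the reported medians, not the trial's data, so the printed P value only illustrates the call rather than reproducing the published one.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)

# Hypothetical day 3 VAS scores (0-10 scale), one value per patient in each arm.
# Group sizes match the trial (52 laparoscopic, 49 open); the spread is assumed.
vas_lap = np.clip(rng.normal(2.1, 1.5, size=52), 0, 10)
vas_open = np.clip(rng.normal(3.0, 1.5, size=49), 0, 10)

# Two-sided Mann-Whitney U-test, as described in the trial's statistics section.
stat, p_value = mannwhitneyu(vas_lap, vas_open, alternative="two-sided")

print(f"U = {stat:.1f}, two-sided P = {p_value:.3f}")
print("medians:", np.median(vas_lap).round(2), np.median(vas_open).round(2))
```

Whether the simulated P value falls below 0.05 depends on the assumed spread of the invented scores; the point of the sketch is the test choice, not the published result.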
395
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Incisional hernia after upper abdominal surgery: a randomised controlled trial of midline versus transverse incision\n\n ABSTRACT.OBJECTIVES:\nTo determine whether a transverse incision is an alternative to a midline incision in terms of incisional hernia incidence, surgical site infection, postoperative pain, hospital stay and cosmetics in cholecystectomy.\n\nABSTRACT.SUMMARY BACKGROUND DATA:\nIncisional hernias after midline incision are commonly underestimated but probably complicate between 2 and 20% of all abdominal wall closures. The midline incision is the preferred incision for surgery of the upper abdomen despite evidence that alternatives, such as the lateral paramedian and transverse incision, exist and might reduce the rate of incisional hernia. A RCT was preformed in the pre-laparoscopic cholecystectomy era the data of which were never published.\n\nABSTRACT.METHODS:\nOne hundred and fifty female patients were randomly allocated to cholecystectomy through midline or transverse incision. Early complications, the duration to discharge and the in-hospital use of analgesics was noted. Patients returned to the surgical outpatient clinic for evaluation of the cosmetic results of the scar and to evaluate possible complications such as fistula, wound dehiscence and incisional hernia after a minimum of 12 months follow-up.\n\nABSTRACT.RESULTS:\nTwo percent (1/60) of patients that had undergone the procedure through a transverse incision presented with an incisional hernia as opposed to 14% (9/63) of patients from the midline incision group (P = 0.017). Transverse incisions were found to be significantly shorter than midline incisions and associated with more pleasing appearance. More patients having undergone a midline incision, reported pain on day one, two and three postoperatively than patients from the transverse group. The use of analgesics did not differ between the two groups.\n\nABSTRACT.CONCLUSIONS:\nIn light of our results a transverse incision should, if possible, be considered as the preferred incision in acute and elective surgery of the upper abdomen when laparoscopic surgery is not an option.\n\nBODY.INTRODUCTION:\nThe rate of incisional hernia after midline incision is commonly underestimated but probably lies between 2 and 20% [1–5]. Thus, incisional hernia is a major postoperative problem. The treatment of incisional hernia is complicated by high rates of recurrences. Recently, in a randomised controlled trial published by Burger et al. [6], midline incisional hernia repair has been shown to be associated with a 10-year cumulative recurrence rate of 63 and 32% for suture and mesh repair, respectively. The midline incision is the preferred incision for surgery of the upper abdomen, despite evidence that alternatives, such as the lateral paramedian and transverse incision, exist and might reduce the rate of incisional hernia [7]. 
Various approaches to opening the abdomen have been advocated over time. The choice for a certain incision is dependent on the exposure necessary for the desired procedure to succeed. A midline incision, be it supraumbilical, infraumbilical or both, is an approach especially suited for emergency and exploratory surgery because of the quick and generous exposure that can be achieved within a few minutes [8, 9]. The avascular nature of the linea alba minimises blood loss during this procedure. A supraumbilical transverse incision may be utilised in case exposure of the upper abdomen is desired. During this incision, the damage inflicted to the segmental arteries and nerves is previously described as being minimal [10]. Previously, only one randomised controlled trial, comparing transverse and true midline incisions, has been published specifically addressing incisional hernia incidence [11]. To determine whether the use of a transverse incision is an alternative to a midline incision for open cholecystectomy in terms of incisional hernia incidence, surgical site infection, postoperative pain and hospital stay, this randomised controlled trial was performed. This trial was conducted in an era when laparoscopic cholecystectomy was not yet available. The possibility of low incisional hernia rates after transverse incisions and the fact that little is known about potential advantages incited us to publish the relevant results of this randomised controlled trial which has been performed in the past and has only been reported in a Dutch thesis by one of the authors (H.L.). The primary endpoint of this study was the incisional hernia incidence after 12 months of follow-up. Secondary endpoints included pain and cosmetic appearance.\n\nBODY.METHODS.PROTOCOL:\nSome 150 consecutive female patients were randomly assigned to a midline or transverse incision as an approach for elective cholecystectomy or combined cholecystectomy and cholangiography (with or without consecutive choledochotomy) (75 and 75 patients, respectively). Emergency procedures were excluded from participation. The sample size is based on an incisional hernia rate reduction from 20 to 6% at a power of 80% and an error rate of 5%. Obtaining informed consent was conducted in accordance with the ethical standards of the Helsinki Declaration of 1975. The investigation reported was performed with informed consent from all of the patients and followed the guidelines for experimental investigation with human subjects and was approved by the medical ethics committee. An independent statistician prepared closed, tamper-proof envelopes containing the random allocation (Fig. 1). Patients were randomised for one of the procedures in theatre through the opening of the envelopes.Fig. 1Flow chart of patient inclusion and follow-up Patient-related factors that were recorded were age, body mass and length and date of operation. Operation-related factors that were recorded were the exact nature of the operation, length of the incision, the thickness of the subcutaneous fat, surgeon performing the procedure, as well as the duration of the operation (skin-to-skin time). In the immediate postoperative period, the use, dose and type of analgesics was recorded and a pain score was administered. The use of analgesics (morphine 7.5 mg intra-muscular injection, 4 h minimum interval between consecutive injections) was monitored for 48 h after surgery; the pain score was administered for the first 6 days after surgery. 
In patients assigned to surgery through a midline incision, the skin was incised from just below the xyphoid process to just above the umbilicus. The abdominal wall was opened in the midline by incising the linea alba. A Collin type (two-bladed) self-retaining retractor was used to maintain exposure. The abdominal wall was closed in one layer using single polygalactin 910 sutures (Vicryl; Ethicon, Amersfoort, The Netherlands). The skin was consequently closed using running monofilament nylon sutures (Ethilon; Ethicon, Amersfoort, The Netherlands). Patients randomised for a transverse incision received a right-sided unilateral transverse incision between 3 and 4 cm below the costal margin. The rectus muscle was incised. The fibres of the external and internal obliques and the transverse muscles were separated in the direction of their course. Exposure was achieved through the use of a manually held single-bladed retractor. Closure of the abdominal wall was achieved by closure of the peritoneum and the posterior rectus fascia using a continuous, polygalactin 910 suture (Vicryl; Ethicon, Amersfoort, The Netherlands). The anterior rectus sheath and the fascia of the internal and external transverses were closed using simple interrupted polygalactin 910 sutures (Vicryl; Ethicon, Amersfoort, The Netherlands). Towards the end of both procedures, a Redon low-vacuum drain catheter was placed, which was guided outside the abdominal cavity approximately 5 cm from the incision. The skin was consequently closed using continuous monofilament nylon suture (Ethilon; Ethicon, Amersfoort, The Netherlands). All patients received a dose of 5,000 IU of sodium–heparin on the morning of the procedure as thrombosis prophylaxis.\n\nBODY.METHODS.STATISTICAL ANALYSIS:\nThe Pearson χ2 test was used for comparing percentages. In case of small expected numbers, a Fisher's exact test was performed. Continuous variables were analysed using the Mann–Whitney test. A P-value of 0.05 or less (two-sided) was considered to be statistically significant. Means and medians are expressed ±standard deviation (SD).\n\nBODY.METHODS.FOLLOW-UP:\nPatients returned to the surgical outpatient clinic for evaluation of the cosmetic results of the scar and to evaluate possible complications, such as fistula, wound dehiscence and incisional hernia, after a minimum of 12 months follow-up. The patient and the surgeon evaluated the cosmetic results independently and were asked to rate the scar as unsatisfactory, satisfactory or fine. Furthermore, the length and width of the scar was measured.\n\nBODY.RESULTS.STUDY GROUP:\nSome 150 consecutive patients were randomised for participation in this study during an inclusion period from April 1977 until July 1979. Seventy-five patients received a transverse incision and 75 patients a midline incision (Fig. 1). One patient was withdrawn from further follow-up after developing peritonitis and consequent acute respiratory distress syndrome (ARDS) not related to the closure of the abdominal wall 2 days after surgery (transverse incision group). The patients' average age was 51.9 and 51.4 years for the midline and the transverse incision groups, respectively. Furthermore, no differences were found in the body mass and average length between the two groups (Table 1). A cholecystectomy was performed using a transverse incision in 52 patients and utilising a midline incision in 52 patients also. Fifteen and 16 patients, respectively, underwent a combined cholangiography/cholecystectomy. 
A further 7 and 6 patients, respectively, were treated with a cholangiography/cholecystectomy plus additional choledochotomy and the postexploratory placement of a T-tube.Table 1Baseline characteristics of the patients undergoing surgery, according to study groupVariableMidline incisionTransverse incisionn = 75n = 74Average age (years) ± SD51.9 ± 14.851.4 ± 13.8Average weight (kg) ± SD71.3 ± 14.568 ± 14.3Average length (cm) ± SD163.5 ± 7.8164 ± 7.3\n\nBODY.RESULTS.SURGEON:\nStaff surgeons performed 17% (13/75 patients) of all procedures performed through a midline incision. The remainder of the procedures through a midline incision was carried out under staff surgeon supervision. Staff surgeons performed 14% of all procedures in the transverse incisions study group (10/74 patients) and supervised the remainder. No statistically significant difference was found between the two randomised groups (P = 0.65).\n\nBODY.RESULTS.DURATION OF SURGERY:\nNo significant difference was noted in the skin-to-skin time (in min) for the two different incisions (Table 2). Surgery utilising midline and transverse incision took 56.9 ± 29.3 and 53.2 ± 26.8 min, respectively (P = 0.35). The total duration of the procedures until extubation (in min) did not differ between the midline and transverse incisions (71.0 ± 30.5 and 67.0 ± 27.3, respectively, P = 0.34).Table 2Length of incision, thickness of subcutaneous fat and skin-to-skin time, according to study groupVariableMidline incisionTransverse incisionP-valueLength of incision (mm) ± SDa164 ± 28140 ± 24<0.0001Thickness of subcutaneous fat (mm) ± SDa34.5 ± 13.030.3 ± 12.40.05Skin-to-skin time (min) ± SDa56.9 ± 29.353.2 ± 26.80.40Width of scar (mm) ± SDb8.3 ± 1.43.3 ± 1.2<0.0001aMeasured during surgery in 75 midline and 74 transverse incisionsbMeasured at follow-up in 63 and 60 midline and transverse incisions, respectively\n\nBODY.RESULTS.PAIN AND ANALGESICS:\nSignificantly more patients, having undergone a midline incision, reported pain on day one, two and three postoperatively (P < 0.0001, Table 3). In the midline incision group, 28/75 patients required no or only one dose of analgesics; the remainder required two or more doses. Thirty-one patients operated through a transverse incision required no analgesics or only one dose; 43 patients (the remainder) required two or more. No significant difference in the use of analgesics was found between the groups (P = 0.69).Table 3Postoperatively reported pain, according to study group, shown as the number of patients reporting pain at the time points indicated (percentage), with the remainder of patients reporting no painTime point after surgeryMidline incision n = 75Transverse incision n = 74P-valuePatients reporting pain, n (%)Patients reporting pain, n (%)3–4 h68 (91)60 (81)0.09First day64 (85)39 (53)<0.0001Second day57 (76)23 (31)<0.0001Third day28 (37)9 (12)<0.0001Fourth day5 (7)3 (4)0.72Fifth day0 (0)1 (1)0.50Sixth day0 (0)1 (1)0.50\n\nBODY.RESULTS.COMPLICATIONS:\nPostoperative complications (Table 4) were seen in 16 out of 75 patients (21%) from the midline incision group and in 15% from the transverse incision group (11 patients) (P = 0.30). Briefly, one patient in each group developed cardiac complications; 8 and 6 patients developed urinary retention after the midline and transverse incisions, respectively (P = 0.59). 
Surgical site infections were diagnosed in 7 and 3 patients, respectively (P = 0.33).Table 4Rate of complications after surgery, according to study group, shown as the number of patients diagnosed with complications (percentage)ComplicationMidline incisionTransverse incisionP-valuen = 75 n (%)n = 75 n (%)Cardiac1 (1)1 (1)1Urinary retention8 (12)6 (8)0.59ARDS01 (1)0.50Surgical site infection7 (9)3 (4)0.33Haemorrhage1 (1)00.50Pneumonia01 (1)0.50Total17 (23)12 (16)0.30\n\nBODY.RESULTS.DISCHARGE:\nForty-five (60%) and 42 (57%) patients from the patients having undergone a midline or a transverse incision, respectively, were discharged on day 6 or 7 postoperatively. The remaining patients from each group left hospital care on day 8 or later. The duration of hospital admission did not differ between the two types of incision (P = 0.74).\n\nBODY.RESULTS.COSMETICS:\nThe width and length of all incisions was measured during the follow-up visit (Table 2). The mean width of the scar after the healing of the midline incisions was found to be 8.3 ± 1.4 mm. The mean width of the scar after the healing of the transverse incisions was measured to be 3.3 ± 1.2 mm. This observed difference is significant (P < 0.0001). The length of the incisions was 140 ± 24 mm and 164 ± 28 mm for the transverse and the midline incisions, respectively. The difference in scar length was found to be significant (P < 0.0001).\n\nBODY.FOLLOW-UP:\nEighty-one percent of all patients operated through a transverse incision were seen during the follow-up examination (n = 60). Of the patients operated through a midline incision, 63 out of 75 were seen at the outpatient clinic (84%). The patients that were lost to follow-up could either not be traced or had deceased (Fig. 1). The minimum follow-up for the evaluation of cosmetic results and hernia incidence was 12 months and the maximum was 36 months.\n\nBODY.FOLLOW-UP.INCISIONAL HERNIA:\nFrom the patients that had undergone the procedure through a transverse incision, one (1/60; 2%) presented with an incisional hernia as opposed to 9 patients from the midline incision group (9/63; 14%); 95% confidence interval (CI) 7.5–25.4%. This difference in hernia incidence is significant (P = 0.017). No significant correlation was found between the incisional hernia rate and surgical site infection (P = 0.07).\n\nBODY.FOLLOW-UP.SUBJECTIVE COSMETICS:\nPatients and surgeons alike were asked to rate the appearance of the scar during the postoperative follow-up outpatient clinic visit. Both the surgeons and the patients found the scar resulting from the transverse incision to be more cosmetically pleasing (P < 0.0001 and P = 0.03, respectively, Table 5).Table 5Number of patients and surgeons rating the cosmetics of a scar at follow-upScoreMidline incision (n = 63)Transverse incision (n = 60)Patients, n (%)Surgeons, n (%)Patients, n (%)Surgeons, n (%)Unsatisfactory6 (10)25 (40)2 (3)6 (10)Satisfactory16 (25)27 (43)9 (15)12 (20)Fine41 (65)11 (17)49 (82)42 (70)Total63636060Difference between type of incision: patients P = 0.03; surgeons P < 0.0001\n\nBODY.DISCUSSION:\nThis prospective randomised study of transverse and midline incisions for open cholecystectomy shows that a significant reduction of incisional hernia incidence can be achieved through the use of a transverse incision. Only one other study (published in 1980) reported the incidence of incisional hernia after upper abdominal midline and unilateral transverse incision in a randomised trial. 
No difference between the two techniques (8 and 6% incisional hernia, respectively) was found, but the relatively short follow-up of 6 months, however, may be held accountable for this finding [11]. Three retrospective studies showed rates of incisional hernia of 3.2, 5.4 and 16.5% for midline incision and 1.3, 6.7 and 13.4% for transverse incision without statistically significant differences [12–14]. The possible reason for the rather high incidence of incisional hernia in the midline incision group (14%) may lie in the use of resorbable 910 polygalactin sutures. Nevertheless, the use of the same type of resorbable suture in the closure of the transverse incisions resulted in a 2% hernia rate. There is evidence for the importance of proper technique and choice of incision as a means to reduce incisional hernia being more important than the use of suture material [7]. Furthermore, as mentioned above, it is known that the incidence of incisional hernia in the case of a midline incision lies between 2 and 20%. From our data, the NNT (numbers needed to treat) is calculated to be 8 (95% CI 5–30) and the RRR (relative risk reduction) is 88% (95% CI 23–100%). Luijendijk et al. [15] have published a hernia rate of 2% after Pfannenstiel incisions closed using 910 polygalactin, which is in agreement with our findings in the patients randomised for a transverse incision, emphasising the importance of the incision over the choice of suture material. In our study, significantly fewer patients reported pain on day 1, 2 and 3 after transverse incisions, a result that was also described by other authors [16, 17]. Greenall et al. [18] published a contradictory report (in 1980) in which no significant difference in postoperative pain was found between midline and transverse incisions. The previously mentioned study, however, only analysed 46 out of 572 patients (8%) with regard to pain, which may explain the finding. In the same way, Lacy et al. suspended visual analogue pain scoring in a study comparing midline and transverse incisions for abdominal aortic surgery. Remarkably, the two groups in our study did not differ in terms of postoperative analgesia, a finding that is also reported by Lindgren et al. [17] and Donati et al. [19]. In our study, surgeons as well as patients were significantly more satisfied with the aesthetic appearance after a transverse in comparison with a midline incision. The scars after transverse incisions were found to be significantly shorter and less wide than the midline incisions, which may account for the observed difference. A possible reason for this is that a transverse incision is executed parallel to the prevailing direction of the skin lines on the abdomen and, therefore, the tension on the wound and consequent scar is low. Cholecystectomy has come a long way since this trial. The introduction and widespread acceptance of laparoscopic technique as the treatment of choice has rendered open cholecystectomy to be an operation for exceptional, and perhaps surgically difficult, circumstances. Nowadays, the study reported is hardly feasible, yet, the results are still applicable and very relevant for other surgical procedures in the (upper) abdomen. Knowledge of the favourable results of a transverse incision may aid surgeons in their choice when finding themselves in the unfortunate position of needing conversion to open cholecystectomy. 
In conclusion, this investigation on transverse incisions might be helpful in reducing the incidence of incisional hernia in patients after open cholecystectomy. The midline incision is a preferred manner to achieve exposure of the abdominal cavity and is considered to be easily performed and quick. Although the midline incision is generally accepted, the incidence of incisional hernias is surprisingly high [1–5]. The choice for a particular incision should not only be based on exposure, but also on hernia incidence reduction, especially since recurrence rates after hernia repair are reported to be very high. Furthermore, the recurrence rate after incisional hernia repair is a disappointing 63 and 32% for suture and mesh repair, respectively [6]. In the light of these results, incisional hernia prevention is warranted. In this investigation, it is shown that a significant reduction (from 14.5 to 1.7%) of incisional hernia incidence was achieved by using a transverse incision. Hence, a transverse incision should be considered as the preferred incision in acute and elective surgery of the upper abdomen in which laparoscopic surgery is not an option. Full exposure of two quadrants is feasible through the use of a unilateral transverse incision in, for example, biliary, bariatric, liver and colonic surgery. The transverse incision should be part of the abdominal surgeon's armamentarium and is a preferable incision to prevent the high incidence of incisional hernia after abdominal surgery.\n\n**Question:** Compared to Midline incision what was the result of Transverse incision on Average patient's weight?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** no significant difference\n"
}
] |
no significant difference
|
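The row above gives enough raw counts to retrace its headline figures: 9/63 incisional hernias after midline incision versus 1/60 after transverse incision (reported P = 0.017), and, in the discussion, an NNT of 8 and an RRR of 88%. A short sketch of that arithmetic follows; the article names Pearson χ² with Fisher's exact test for small expected counts, and Fisher's exact test is used here, so the computed P should land near the reported value but need not match it exactly.

```python
from scipy.stats import fisher_exact

# Hernia counts taken from the follow-up section of the row above.
midline_hernia, midline_total = 9, 63
transverse_hernia, transverse_total = 1, 60

table = [
    [midline_hernia, midline_total - midline_hernia],           # 9 hernia, 54 none
    [transverse_hernia, transverse_total - transverse_hernia],  # 1 hernia, 59 none
]
odds_ratio, p_value = fisher_exact(table)  # two-sided by default

risk_midline = midline_hernia / midline_total            # ~0.143
risk_transverse = transverse_hernia / transverse_total   # ~0.017
arr = risk_midline - risk_transverse                     # absolute risk reduction ~0.126
nnt = 1 / arr                                            # ~8
rrr = arr / risk_midline                                 # ~0.88

print(f"Fisher exact two-sided P = {p_value:.3f}")
print(f"ARR = {arr:.3f}, NNT = {nnt:.1f}, RRR = {rrr:.0%}")
```

ARR = 9/63 − 1/60 ≈ 0.126, so NNT ≈ 8 and RRR ≈ 88%, matching the figures quoted in the article's discussion.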
264
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Randomized Clinical Trial of Laparoscopic Versus Open Repair of the Perforated Peptic Ulcer: The LAMA Trial\n\n ABSTRACT.BACKGROUND:\nLaparoscopic surgery has become popular during the last decade, mainly because it is associated with fewer postoperative complications than the conventional open approach. It remains unclear, however, if this benefit is observed after laparoscopic correction of perforated peptic ulcer (PPU). The goal of the present study was to evaluate whether laparoscopic closure of a PPU is as safe as conventional open correction.\n\nABSTRACT.METHODS:\nThe study was based on a randomized controlled trial in which nine medical centers from the Netherlands participated. A total of 109 patients with symptoms of PPU and evidence of air under the diaphragm were scheduled to receive a PPU repair. After exclusion of 8 patients during the operation, outcomes were analyzed for laparotomy (n = 49) and for the laparoscopic procedure (n = 52).\n\nABSTRACT.RESULTS:\nOperating time in the laparoscopy group was significantly longer than in the open group (75 min versus 50 min). Differences regarding postoperative dosage of opiates and the visual analog scale (VAS) for pain scoring system were in favor of the laparoscopic procedure. The VAS score on postoperative days 1, 3, and 7 was significant lower (P < 0.05) in the laparoscopic group. Complications were equally distributed. Hospital stay was also comparable: 6.5 days in the laparoscopic group versus 8.0 days in the open group (P = 0.235).\n\nABSTRACT.CONCLUSIONS:\nLaparoscopic repair of PPU is a safe procedure compared with open repair. The results considering postoperative pain favor the laparoscopic procedure.\n\nBODY.INTRODUCTION:\nThe incidence of perforated peptic ulcer (PPU) has declined over the past several years because of the introduction of anti-ulcer medication and Helicobacter eradication therapy [1, 2]. Nevertheless the incidence and mortality of PPU is 5–10%. The mortality will increase up to 50% if the perforation exists for more than 24 h [3, 4]. There are several options for treatment of PPU, but the preferred treatment is surgery by upper abdominal laparotomy [5, 6]. Mouret et al. published the first results of laparoscopic repair in 1990 [7]. He concluded that it was a good method that probably reduced postoperative wound problems and adhesions. After the success of laparoscopic cholecystectomy and other laparoscopic procedures, it was thought that patients would have less pain and a shorter hospital stay after laparoscopic correction of PPU [8, 9]. Various studies have shown that laparoscopic suturing of the perforation is feasible, but there is still no proof of real benefits of laparoscopic correction [3, 6, 10–12]. 
Therefore we performed a multicenter randomized trial comparing open correction of PPU with laparoscopic repair.\n\nBODY.METHODS.PARTICIPANTS:\nPatients with symptoms of the clinical diagnosis of PPU were included in nine medical centers in the Netherlands participating in the LAMA (LAparoscopische MAagperforatie) trial between March 1999 and July 2005. Eligible patients were informed of the two surgical approaches and were invited to participate in the study. Exclusion criteria were the inability to read the Dutch language patient information booklet, inability to complete informed consent, prior upper abdominal surgery, and current pregnancy. The ethics committees of all participating institutions approved the trial.\n\nBODY.METHODS.RANDOMIZATION:\nSurgeons contacted the study coordinator after the patients had provided informed consent and randomization took place by opening a sealed envelope. The envelope randomization was based on a computer-generated list provided by the trial statistician.\n\nBODY.METHODS.SURGICAL PROCEDURE:\nAll patients received intravenous antibiotics prior to operation and were allocated for Helicobacter pylori eradication therapy according to established guidelines [13]. The open surgical procedure was performed through an upper abdominal midline incision. Closure of PPU was to be achieved by sutures alone or in combination with an omental patch. After repair of the defect cultures were drawn from the peritoneal fluid, after which the peritoneal cavity was lavaged. During lavage it was permissible to insufflate the stomach to test for leakage of the closed defect. No method was specified for closing the abdomen. Laparoscopic repair was performed with the patient and the team set up in the \"French\" position. Trocars were placed at the umbilicus (video scope) and on the left and right midclavicular line above the level of the umbilicus (instruments). If necessary a fourth trocar was placed in the subxiphoid space for lavage or retraction of the liver. Surgeons were free to use either 0° or 30° video scopes for the procedure. The rest of the procedure was identical to that described above for open repair. No method was specified for closing the trocar incisions.\n\nBODY.METHODS.POSTOPERATIVE FOLLOW-UP:\nPostoperative pain was scored by means of a visual analog scale (VAS) for pain on days 1, 3, 7, and 28 ranging from 0 (no pain) to 10 (severe pain). In addition, the days during which opiates were used by the patients were registered. All complications, minor and major, were monitored. The treating surgeons determined time of discharge on the basis of physical well-being, tolerance of a normal diet, and ability to use the stairs. For this reason, this was an unblinded trial. Postoperative hospital stay without correction for time spent in hospital as a result of non-medical reasons (inadequate care at home) was calculated. Patients were invited to attend the outpatient clinic at 4 weeks, 6 months, and one year postoperatively. They were asked to complete forms related to pain and use of analgesics.\n\nBODY.METHODS.STATISTICAL ANALYSIS:\nData analysis was carried out according to the intention-to-treat principle as established in the trial protocol. Data were collected in a database, and statistical analyses were performed with the Statistical Package for Social Sciences for Windows (SPSS 15.0, SPSS Inc., Chicago, IL). A researcher blinded to the nature of the procedures performed all data analyses. The primary outcome of the trial was duration of hospital stay. 
The power analysis was performed on basis of a reduction in hospital stay by 1.5 days (10–8.5 days from admission) in favor of the laparoscopically treated group using a β of 0.80 and an α of 0.05. This resulted in a trial size of 50 patients per group. The Pearson chi-squared test was used to compare categorical variables, and the Mann-Whitney U-test was used to compare continuous variables as we could not assume normal distribution because of the relatively small numbers. In Tables 1–6 medians and interquartile ranges (IQR) are reported. All data were analyzed according to the intention-to-treat principle; i.e., patients remained in their assigned group even if during the procedure the surgeon judged the patient not to be suitable for the technique assigned or if conversion was required. Null hypotheses were tested two-sided and a P value of 0.05 or less was considered statistical significant.Table 1Baseline parametersLaparoscopic repairOpen repairP valuen = 52 n = 49 Male:female ratio1.3:11.9:1Median age (years) + IQR 66 (25.8)59 (29.5)0.185Median BMI (kg/m2) + IQR23 (4)22 (5)0.118Median duration of symptoms (h) + IQR11 (17)11 (19)0.948Median blood pressure systolic (mmHg) + IQR125 (38.5)130 (36.5)0.457Median blood pressure diastolic (mmHg) + IQR75 (25.5)75 (24.5)0.596Median heart rate (beats/min) + IQR 88 (34.0)92 (21)0.403Median body temperature (°C) + IQR 36.9 (0.92)36.8 (1.5)0.658Mannheim Peritonitis Index + IQR19.5 (8.25)16 (14)0.386Median white cell count (×109/l) + IQR 12.1 (8.9)12.1 (7.75)0.467Median ASA score + IQR 1.0 (1.0)1.5 (1.0)0.902IQR interquartile range, difference between 25th percentile and 75th percentile; BMI body mass indexASA American Society of Anesthesiologists Association score\n\nBODY.RESULTS.PATIENTS:\nA total of 109 patients were included in the trial based on a high suspicion of PPU (Fig. 1). Eight patients were excluded during operation because no gastric perforation was detected or a defect in other parts of the digestive tract was uncovered. Data for these patients were not collected and the patients were excluded from further analysis. The remaining 101 patients made up the study population; their baseline parameters are given in Table 1. Fifty-two patients were randomized for laparoscopic repair and 49 for open repair of the perforation. Forty patients were female. The mean age of the patients was 61 years. The BMI (body mass index) was equally distributed between the groups, with a median of 22.5. Patients in both groups had been suffering from symptoms for a mean duration of 11 h, and those in the laparoscopy group presented with a median Mannheim Peritonitis index [14] of 19.5, whereas those in the open group had a median Mannheim Peritonitis index of 16.Fig. 1Patient flow chart Thirty patients reported the use of non-steroidal anti-inflammatory drugs (NSAIDs; 17 laparoscopic versus 13 open), and 10 patients used proton pump inhibitors (6 laparoscopic versus 4 open). Patient history revealed gastric ulcer disease in 19 patients.\n\nBODY.RESULTS.INTRAOPERATIVE FINDINGS:\nThe discovered ulcer perforations were found to have a mean diameter of 10 mm, which did not differ between groups (Table 2). Location of the perforated ulcers was distributed equally between groups. Defects were located in the prepyloric region (n = 41), the postpyloric region (n = 34), and at the pylorus (n = 20). The median volume of lavage fluid used was 1,000 ml (range: 100–5,000 ml). The surgeon decided the amount of lavage used. 
There was no consensus on how much was necessary. Median blood loss did not differ between groups. Skin-to-skin time differed by 25 min, favoring open repair of PPU (Table 2).Table 2Intraoperative findingsLaparoscopic repairOpen repairP valuen = 52n = 49Median size of perforation (mm) + IQR10.0 (7.0)7.0 (6.0)0.379Number of patients with defect Pyloric812 Postpyloric2014 Prepyloric1922 Median volume of lavage (ml) + IQR1,000 (1,500)1,000 (1,425)1.000 Median bloodloss (ml) + IQR10.0 (40.0)10.0 (50.0)0.423 Skin to skin time (min) + IQR75 (47.2)50 (25.5)0.000\n\nBODY.RESULTS.INTRAOPERATIVE COMPLICATIONS:\nConversion to open surgery was required in four patients (8%). Reasons for conversion included the inability to visualize the ulcer defect because of bleeding (n = 1/52; 2%), inability to reach the defect because of perforation in the vicinity of the gastroduodenal ligament and because of a dorsal gastric ulcer (n = 2/52; 4%), and inability to find the perforation (n = 1/52; 2%).\n\nBODY.RESULTS.POSTOPERATIVE COMPLICATIONS:\nComplications were statistically equally distributed between the two groups (Table 3). There were 12 complications in 9 patients in the laparoscopic group and 24 complications in 15 patients in the open group. Mortality was 4% in the laparoscopic group and 8% in the open group. In the laparoscopic group death was caused by sepsis due to leakage at the repair site. In the open group 3 patients died because of pulmonary problems (ARDS, pneumonia), and 1 patient died after complications following a cerebrovascular accident (CVA) combined with respiratory insufficiency.Table 3Postoperative complicationsLaparoscopic repairOpen repairP valuen = 52n = 49Pneumonia21Respiratory insufficiency13ARDS1Cardiac problems22Sepsis31Leakage at repair site2Abscess3Ileus1Fascial dehiscence1Wound infection3Urinary tract infection2Incisional hernia1Cerebrovascular accident1Death24Total complications12240.061Total of patients with complications ≥19 (18%)15 (36%)\n\nBODY.RESULTS.DISCHARGE:\nTime to discharge was similar for the two groups, with a median difference of 1.5 days (Table 4). Nasogastric decompression could be stopped after 2–3 days in both groups (Table 4).Table 4Duration of hospital stay, nasogastric decompressionLaparoscopic repairOpen repairP valuen = 52n = 49Median hospital stay (days) + IQR6.5 (9.3)8.0 (7.3)0.235Median duration of nasogastric decompression (days) + IQR2.0 (3.0)3.0 (1.3)0.334\n\nBODY.RESULTS.PAIN:\nVisual analog pain scores were in favor of laparoscopic repair (Table 5; p < 0.005). Although the median duration of opiate use in the two groups was 1.0, the mean duration in the open group was found to be 0.6 days longer than in the laparoscopic group (Table 6).Table 5Postoperative painMedian VAS pain scoreMedian VAS pain scoreP valueLaparoscopic repairOpen repairDay 1 + IQR3.8 (3.0)5.15 (2.5)0.001Day 3 + IQR2.1 (2.5)3.0 (2.4)0.035Day 7 + IQR1.0 (2.0)1.85 (2.8)0.036Day 28 + IQR0.3 (0.7)0.0 (1.7)0.748Table 6Postoperative opiate usageOpiate requirementOpiate requirementP valueLaparoscopic repairOpen repairMedian duration (days) + IQR1.0 (1.25)1.0 (1.0)0.007Mean duration (days) ± SD1.0 ± 0.91.6 ± 0.90.007\n\nBODY.RESULTS.VAS APPEARANCE OF SCAR:\nThe VAS score for appearance of the scar left by the respective procedures (subjectively recorded in the same way as pain) differed by 2.3 points, favoring the laparoscopic procedure (7.7 vs. 
5.4; P = 0.033).\n\nBODY.DISCUSSION:\nThe need for surgery for PPU has declined enormously in Europe and America, with reported reductions ranging from 50% to 80%, thanks to effective gastric acid-reducing medication [15]. For this reason, as well as because many surgeons prefer upper laparotomy, it took more time than expected to include 100 patients in our study. Reasons given by surgeons who prefer open repair were that it is a more familiar procedure and it can be completed faster than laparoscopy. It was also noted that patients often undergo operation at night, when the surgeon on call was not always laparoscopically trained. Other randomized trials have already shown the feasibility of laparoscopic repair of PPU [3, 4, 6, 10]. Only a few had more than 100 patients, and some emphasized results from subgroups of patients [8, 11, 12]. We did not subdivide our patients and included patients with risk factors for developing sepsis or conversion [10]. In eight of the original 109 patients (7%) it became evident during the operation that the patient had a diagnosis different from PPU. In the patients who were randomized for laparoscopy this discovery revealed the benefit of laparoscopy as a diagnostic procedure, indicating either an upper or lower laparotomy or continuation of the laparoscopy [16]. Conversion rate in the laparoscopy group was 8% (4/52). This is much lower than that reported in the literature, where conversion rates as high as 60% were found [3, 4, 6]. This may be partially explained by the fact that only trained and experienced laparoscopic surgeons (those performing at least 50 laparoscopic procedures a year) participated in this trial, confirming the belief that this procedure should only be done by experienced surgeons [3–5]. Operating time was significantly longer in the laparoscopy group (75 min versus 50 min), which is comparable to reports in the literature [3, 10]. A possible explanation for the longer operative time is that laparoscopic suturing is more demanding [9, 17], especially if the edges of the perforation are infiltrated and friable. Sutures easily tear out and it is more difficult to take large bites and to tie knots properly. Use of a single-stitch method described by Siu et al. [18], fibrin glue, or a patch might solve this problem [12, 19]. Another reason for the increase in operating time is the irrigation procedure. Irrigating through a 5-mm or even a 10-mm trocar is time consuming, and suction of fluid decreases the volume of gas and reduces the pneumoperitoneum. There is no evidence that irrigation lowers the risk of sepsis [20], so it might only be necessary if there are food remnants in the abdomen; perhaps there is no need for it at all. One of the suspected risks of laparoscopic surgery is that of inducing sepsis by increasing bacterial translocation while establishing a pneumoperitoneum [6]. This danger could not be confirmed in our trial. Furthermore, data suggest that there is a decrease in septic abdominal complications when laparoscopic surgery is used [8]. Evidence already exists that laparoscopic correction of PPU causes less postoperative pain [6, 12, 17, 18]. The meta-analysis published by Lau [6] showed that eight out of ten studies reported a significant reduction in the dosage of analgesics required in the laparoscopic group. Also, the three studies that had included VAS pain scores showed consistently lower pain scores, as was observed in our study as well. 
Whether this will lead to a better quality of life for patients, especially during the first weeks after surgery still needs to be analyzed. Although patients in our series who underwent laparoscopy had less postoperative pain, there was no difference in the length of hospital stay in our two groups. In fact, hospital stay overall in our patients was very long. This was most likely caused by the fact that many patients, especially the elderly, could not be discharged because of organizational reasons. Of the 101 patients, 41% were 70 years or older (24 in the laparoscopic group versus 17 in the open group). It appears that the age of PPU patients is increasing, and this will eventually represent a significant problem in the future [2, 3]. One benefit of the laparoscopic procedure not often mentioned in literature [6] is cosmetic outcome. Nowadays patients are aware of this benefit, and sometimes this is the reason why they demand laparoscopic surgery. In conclusion, the results of the LAMA trial confirm the results of other trials that laparoscopic correction of PPU is safe, feasible for the experienced laparoscopic surgeon, and causes less postoperative pain. Operating time was longer in the laparoscopic group and there was no difference in length of hospital stay or incidence of postoperative complications.\n\n**Question:** Compared to Conventional open approach what was the result of Laparoscopic surgery on Visual analog scale pain score at day 1?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly decreased\n"
}
] |
significantly decreased
|
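For readers handling this dump programmatically, the row just completed shows the fields that follow each prompt: a completion (an assistant message beginning "**Answer:**") and a Label drawn from three classes. The sketch below shows one way to normalize the free-text answer against the label; the field names and the third class name ("no significant difference") are inferred from the dump rather than documented, so treat them as assumptions.

```python
# A minimal sketch, assuming each row of this dump carries a completion (a list of chat
# messages) and a Label drawn from three classes. Field names and the third class name
# ("no significant difference") are inferred from the dump itself, not documented.
LABEL_CLASSES = {"significantly increased", "no significant difference", "significantly decreased"}

def extract_label(completion_messages):
    """Pull the answer text out of the assistant message and normalize its tense."""
    text = completion_messages[-1]["content"]
    answer = text.replace("**Answer:**", "").strip().lower().rstrip(".")
    # The prompts ask for "significantly increase/decrease", while the stored answers use
    # the past tense ("significantly increased/decreased"); normalize to the stored form.
    if answer in {"significantly increase", "significantly decrease"}:
        answer += "d"
    return answer

row = {
    "completion": [{"role": "assistant", "content": "**Answer:** significantly decreased\n"}],
    "Label": "significantly decreased",
}
assert extract_label(row["completion"]) in LABEL_CLASSES
assert extract_label(row["completion"]) == row["Label"]
```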
370
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Efficacy and Tolerability of Budesonide/Formoterol in One Hydrofluoroalkane Pressurized Metered-Dose Inhaler in Patients with Chronic Obstructive Pulmonary Disease\n\n ABSTRACT:\nBackground: Combination therapy with a long-acting bronchodilator and an inhaled corticosteroid (ICS) is recommended in patients with chronic obstructive pulmonary disease (COPD) who have frequent exacerbations. The efficacy and tolerability of the combination of budesonide/formoterol have been demonstrated in patients with COPD when administered via the dry powder inhaler (DPI) in a 1-year study and when administered via the hydrofluoroalkane (HFA) pressurized metered-dose inhaler (pMDI) in a 6-month study. Objective: This study assessed the long-term efficacy and tolerability of budesonide/formoterol HFA pMDI in patients with moderate to very severe COPD. Methods: This was a 12-month, randomized, double-blind, double-dummy, parallel-group, active- and placebo-controlled, multicentre study (NCT00206167) of 1964 patients aged ≥40 years with moderate to very severe COPD conducted from 2005 to 2007 at 237 sites in the US, Europe and Mexico. After 2 weeks of treatment based on previous therapy (ICSs, short-acting bronchodilators allowed), patients received one of the following treatments twice daily: budesonide/formoterol pMDI 160/4.5 μg × two inhalations (320/9 μg); budesonide/formoterol pMDI 80/4.5 μg × two inhalations (160/9 μg); formoterol DPI 4.5 μg × two inhalations (9 μg); or placebo. Main outcome measures: The co-primary efficacy variables were pre-dose forced expiratory volume in 1 second (FEV1) and 1-hour post-dose FEV1. Results: Budesonide/formoterol 320/9 μg demonstrated greater improvements in pre-dose FEV1 versus formoterol (p = 0.008), and both budesonide/formoterol doses demonstrated greater improvements in 1-hour post-dose FEV1 versus placebo (p < 0.001). The rate of COPD exacerbations was lower in both budesonide/formoterol groups compared with formoterol and placebo (p ≤ 0.004). Both budesonide/formoterol doses were more effective than placebo (p ≤ 0.006) for controlling dyspnoea and improving health status (St George's Respiratory Questionnaire). All treatments were generally well tolerated. The incidence of pneumonia was not different for active (3.4–4.0%) and placebo (5.0%) groups. Conclusions: Budesonide/formoterol pMDI (320/9 μg and 160/9 μg) improved pulmonary function and reduced symptoms and exacerbations over 1 year in patients with moderate to very severe COPD. Only budesonide/formoterol pMDI 320/9 μg demonstrated greater efficacy for both co-primary variables compared with formoterol DPI 9 μg. 
Both budesonide/formoterol pMDI dosages were well tolerated relative to formoterol and placebo.\n\nABSTRACT.ELECTRONIC SUPPLEMENTARY MATERIAL:\nSupplementary material is available for this article at 10.2165/00003495-200969050-00004 and is accessible for authorized users.\n\nBODY.BACKGROUND:\nIn patients with chronic obstructive pulmonary disease (COPD) and frequent exacerbations, the combination of a long-acting bronchodilator and an inhaled corticosteroid (ICS) is recommended.[1] Budesonide and formoterol delivered via a single dry powder inhaler (DPI; Symbicort® Turbuhaler®; AstraZeneca, Lund, Sweden) have been shown to improve pulmonary function[2,3] and reduce COPD exacerbation rates.[2,3] Fixed-combination budesonide/formoterol is available in the US in a hydrofluoroalkane pressurized metered-dose inhaler (pMDI; Symbicort® Inhalation Aerosol; AstraZeneca LP, Wilmington, DE, USA). While most patients with COPD can benefit from either a pMDIor DPI, the pMDI formulation allows patients whomay have inadequate peak inspiratory flow for optimal delivery from a DPI to use the pMDI effectively. A recent 6-month study demonstrated the efficacy and tolerability of budesonide/formoterol pMDI in patients with moderate to very severe COPD.[4] Treatment with budesonide/formoterol pMDI resulted in significant improvements in pulmonary function, dyspnoea and health-related quality of life; however, the duration of the study was limiting with respect to the analysis of low-frequency events, such as COPD exacerbations, and precluded an assessment of safety outcomes with long-term exposure.[4] In this study, the long-term efficacy and tolerability of two dosage strengths of budesonide/formoterol pMDI were assessed over 12 months in patients with moderate to very severe COPD.\n\nBODY.PATIENTS AND METHODS.PATIENTS:\nThe inclusion criteria were designed to select a population with moderate to very severe COPD with previous exacerbations (i.e. appropriate candidates for combination ICS/long-acting β2-adrenoceptor agonist [LABA] therapy): age ≥40 years, diagnosis of symptomatic COPD for >2 years, ≥10 pack-year smoking history, prebronchodilator forced expiratory volume in 1 second (FEV1) of ≤50% of predicted normal and prebronchodilator FEV1/forced vital capacity (FVC) of <70%. Patients were to have a Modified Medical Research Council dyspnoea scale score of ≥2 and a history of at least one COPD exacerbation requiring oral corticosteroids or antibacterials within 1–12 months before the first study visit. Additional enrolment criteria were the same as those in a similar 6-month study by Tashkin et al.[4]\n\nBODY.PATIENTS AND METHODS.STUDY DESIGN:\nThis was a randomized, double-blind, double-dummy, parallel-group, active- and placebo-controlled, multicentre trial (ClinicalTrials.gov identifier: NCT00206167) conducted from 2005 to 2007 at 237 sites in the US, Europe and Mexico. The study protocol was approved by a local institutional review board and ethics committee, and written informed consent was obtained from patients. The study was designed to conform with the Declaration of Helsinki, and was consistent with the International Conference on Harmonisation and Good Clinical Practice and applicable regulatory requirements. After meeting eligibility criteria, patients entered a 2-week run-in period, during which they received ICS monotherapy if previously stable on ICS (alone or in combination) and ipratropium bromide at a fixed dose if previously receiving anticholinergics. 
Albuterol (salbutamol) was permitted for rescue use throughout the study. At randomization, all previous ICSs were discontinued and patients were randomized to one of four treatments (figure 1). Patients who were treated with ipratropium bromide during the run-in period were allowed to continue on it throughout the study period. Other allowed and disallowed concomitant medications were the same as those reported in the study by Tashkin et al.[4]\nFig. 1Study design. To maintain blinding, patients received both a pressurized metered-dose inhaler (pMDI) and a dry powder inhaler (DPI) containing either active treatment or double-dummy placebo (PL) as appropriate. This study used formoterol (FM) DPI (Oxis® Turbuhaler®, AstraZeneca, Lund, Sweden) as the FM comparator because FM is not available as a hydrofluoroalkane pMDI in the US. A previous study in asthma patients reported equivalent FM-related bronchodilatory effects when FM was administered in combination with budesonide (BUD) via pMDI or alone via DPI.[5] Patients were asked to return to the clinic for follow-up visits 3–8 at the end of months 1, 2, 4, 6, 9 and 12, and received a telephone call 4 weeks after the last clinic visit. bid = twice daily; R = randomization.\n\nBODY.PATIENTS AND METHODS.EFFICACY EVALUATIONS:\nThe co-primary efficacy variables, measured at all clinic visits, were pre-dose FEV1 and 1-hour post-dose FEV1. Additional pulmonary function variables included pre-dose and 1-hour post-dose FVC measured at all clinic visits, and morning and evening peak expiratory flow (PEF) recorded daily in patient diaries. In a subset of patients (n = 491 of 1964; 25%) who agreed to undergo spirometry testing, 12-hour serial spirometry was performed and pre-dose and 1-hour post-dose inspiratory capacity (IC) were collected at randomization and months 6 and 12. FEV1 was measured pre-dose and at 5, 15, 30, 60, 120, 180, 240, 360, 480, 600 and 720 minutes post-dose. Baseline-adjusted average 12-hour FEV1 was calculated as the area between the 12-hour post-dose FEV1-over-time curve and the baseline pre-dose FEV1, divided by observation time. Spirometry was performed according to American Thoracic Society guidelines.[6] Crapo-predicted normals for FEV1 were used.[7] Secondary efficacy endpoints included COPD exacerbations, dyspnoea (separate and composite scores) and health status, all assessed as previously described.[4] A COPD exacerbation was defined as worsening of COPD requiring an oral corticosteroid or hospitalization.[4] Dyspnoea was assessed daily before the evening dose of study medication using the Breathlessness Diary, a validated single-item component of the Breathlessness Cough and Sputum Scale (BCSS).[8] Health status was assessed at months 1, 2, 6 and 12 using the St George's Respiratory Questionnaire (SGRQ).[9,10] Sleep score, percentage of awakening-free nights (sleep score of 0) and study rescue medication use were assessed as previously described by Tashkin et al.[4]\n\nBODY.PATIENTS AND METHODS.SAFETY EVALUATIONS:\nSafety was assessed by adverse event (AE) reporting. Pneumonia events were reported by physicians based on the Medical Dictionary for Regulatory Activities (version 10.0) pneumonia-related preferred terms (pneumonia, bronchopneumonia, lobar pneumonia or pneumonia staphylococcal). 
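The baseline-adjusted average 12-hour FEV1 defined above (the area between the post-dose FEV1-over-time curve and the pre-dose value, divided by the observation time) reduces to a trapezoid-rule calculation. The sketch below uses invented FEV1 values at the measurement times stated for the serial-spirometry subset, so it only illustrates the formula.

```python
# Sketch of the baseline-adjusted average 12-hour FEV1 described above: the area between
# the FEV1-versus-time curve and the pre-dose (baseline) FEV1, divided by observation time.
# The FEV1 values below are invented for illustration; the time points are those stated
# for the serial-spirometry subset.
import numpy as np

time_min = np.array([0, 5, 15, 30, 60, 120, 180, 240, 360, 480, 600, 720])
fev1_l = np.array([1.10, 1.22, 1.28, 1.30, 1.32, 1.30, 1.28, 1.27, 1.24, 1.22, 1.20, 1.18])
baseline_fev1_l = fev1_l[0]   # pre-dose FEV1 (L)

area_above_baseline = np.trapz(fev1_l - baseline_fev1_l, time_min)        # L x min
average_12h_change = area_above_baseline / (time_min[-1] - time_min[0])   # average change (L)
print(f"baseline-adjusted average 12-hour FEV1: {average_12h_change:.3f} L")
```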
Vital signs and 12-lead ECGs were evaluated as previously described by Tashkin et al.[4] Subsets of patients were assessed for 24-hour urinary cortisol (n = 179), 24-hour Holter monitoring (n = 520), bone mineral density (BMD) at the hip and spine regions (n = 326), and ophthalmological assessments (n = 461), including intraocular pressure and progression of lenticular opacities. In all patients, blood and urine samples were collected, and comprehensive physical examinations were carried out at the time of screening and at the end of month 12. Blood and urine samples were analysed by a central laboratory (Quest Diagnostics Clinical Laboratories, Van Nuys, CA, USA). Vital signs, including heart rate and blood pressure, were measured at all study visits. Samples to assess 24-hour urinary cortisol levels were collected in a subset of patients at or before randomization and within 1 week before the 6- and 12-month visits, and analysed by Quest Diagnostics. 24-Hour Holter monitoring was performed at screening and months 1 and 4, BMD assessments (two sets of dual energy x-ray absorptiometry scans in the hip and spine regions) were performed at screening and the end of month 12, and ophthalmology assessments (intraocular pressure and lenticular opacities) were performed at screening and the end of months 6 and 12.\n\nBODY.PATIENTS AND METHODS.STATISTICAL ANALYSES:\nThe efficacy analysis set (i.e. intent-to-treat population) included all randomized patients who received at least one dose of randomized study medication and contributed sufficient data for at least one co-primary or secondary efficacy endpoint to be calculated during the randomized treatment period. The safety analysis population included all randomized patients who received at least one dose of randomized study medication and from whom any post-randomization data were available. For the subsets of patients who underwent serial spirometry, 24-hour urinary cortisol, BMD, 24-hour Holter monitoring and ophthalmological analyses, analysis sets were defined as those patients who received at least one dose of randomized study medication and had baseline and on-treatment values for the variable being assessed. Patients who discontinued prematurely completed final visit (month 12) assessments at the time of discontinuation, followed by the 4-week follow-up. A sample size of approximately 400 patients per treatment group was calculated to allow 90% power to detect a reduction in the number of COPD exacerbations of approximately 30%, adjusting for overdispersion of 2.3. This sample size ensured >95% power to detect a difference of 0.10L in FEV1, given an estimated standard deviation of 0.3 L. All tests were two-sided, with p ≤ 0.05 considered significant. The prespecified primary comparators were formoterol DPI for pre-dose FEV1 to demonstrate the contribution of budesonide and placebo for 1-hour post-dose FEV1. To address issues of multiplicity relating to multiple-dose comparisons and multiple variables of interest, a fixed-sequence testing procedure was adopted. The primary comparisons were prespecified as budesonide/formoterol pMDI 320/9 μg compared with (i) placebo for pre-dose FEV1 and 1-hour post-dose FEV1; and (ii) formoterol DPI 9 μg for pre-dose FEV1. If significant differences were obtained, comparisons continued with budesonide/formoterol pMDI 320/9 μg compared with placebo for (i) dyspnoea; (ii) SGRQ total score; and (iii) number of exacerbations. 
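The FEV1 part of the sample-size statement above (a 0.10 L difference, SD 0.3 L, two-sided α = 0.05, roughly 400 patients per group) can be checked directly. The recomputation below uses statsmodels and is an editorial illustration, not the sponsor's original calculation.

```python
# A quick check of the stated FEV1 power calculation (difference of 0.10 L, SD 0.3 L,
# two-sided alpha 0.05, ~400 patients per group). statsmodels is assumed to be available;
# this recomputation is illustrative only.
from statsmodels.stats.power import TTestIndPower

effect_size = 0.10 / 0.3   # Cohen's d, about 0.33
power = TTestIndPower().power(effect_size=effect_size, nobs1=400, alpha=0.05,
                              ratio=1.0, alternative="two-sided")
print(f"power with 400 patients per group: {power:.3f}")   # > 0.99, consistent with ">95%"
```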
If significant differences were obtained for each of these key secondary variables, the lower dose of budesonide/formoterol pMDI was tested on the co-primary variables, as previously described, and, if significant differences were obtained, testing continued with number of exacerbations, dyspnoea and SGRQ total score compared with placebo. For all secondary efficacy variables, the primary comparison was budesonide/formoterol pMDI versus placebo. Primary and secondary variables were assessed as the change from baseline to the average over the randomized treatment period except for average 12-hour FEV1, mean FEV1 at 12 hours and SGRQ at end of treatment. Changes from baseline in the co-primary efficacy variables were analysed via analysis of co-variance (ANCOVA), adjusting for treatment, country and baseline value. The following three key secondary efficacy variables were identified: (i) dyspnoea; (ii) SGRQ total score; and (iii) exacerbations. Secondary variables were analysed similar to the co-primary efficacy variables; however, exacerbation rate (number/patient-treatment year) was analysed using a Poisson regression model, adjusted for country and differential randomization time, and time to first COPD exacerbation was described via Kaplan-Meier plot and analysed using the log-rank test. The number and percentage of patients experiencing clinically meaningful changes (based on a prespecified minimal important difference) in dyspnoea were analysed using the Cochran-Mantel-Haenszel test, adjusting for country. Geometric mean 24-hour urinary cortisol levels at end of treatment were compared between treatment groups using a multiplicative ANCOVA model. Mean changes from baseline to the average during the randomized treatment period in 12-lead ECG, and Holter variables and ophthalmology variables were analysed using a model similar to that used for the co-primary variables. BMD variables (natural logarithm of the analysis timepoint minus the natural logarithm of the respective baseline value for hip and spine) were analysed using an ANCOVA model adjusting for country, treatment and natural logarithm of the baseline value. Other safety data were summarized using descriptive statistics. No formal hypothesis testing of the safety data was performed.\n\nBODY.RESULTS.PATIENTS:\nOf 1964 randomized patients, 1355 completed the study (figure 2). Discontinuation was greater with placebo versus budesonide/formoterol and formoterol driven by withdrawal of consent. Time to discontinuation was significantly (p ≤ 0.004) prolonged in both budesonide/formoterol groups versus placebo. AE was the most common reason for study discontinuation, with a similar incidence across treatments (12.1–13.5%). Approximately 60% of patients had documented use of an ICS, either alone or in combination before entering the study. The percentage of patients who discontinued from the study in the formoterol and placebo groups was slightly higher in patients previously treated with ICS (34.9% and 38.0%, respectively) compared with those not previously receiving ICS before the study (26.8% and 33.9%, respectively). In contrast, patients in the budesonide/formoterol 320/9-μg group previously receiving ICS had a lower discontinuation rate compared with those not previously receiving ICS (23.1% vs 34.1%, respectively). In the budesonide/formoterol 160/9-μg group, the percentage of patients who discontinued the study was similar for those who were previously receiving an ICS (29.5%) and those who were not (28.2%).\nFig. 
2Patient disposition. AE = adverse event; bid = twice daily; BUD = budesonide; DPI = dry powder inhaler; FM = formoterol; med = medication; PL = placebo; pMDI = pressurized metered-dose inhaler. Most demographic and disease characteristics at baseline were similar across treatments (table I). However, in the serial spirometry subset (n = 491), mean percentage reversibility was greater in the budesonide/formoterol 320/9 μg (19.9%), budesonide/formoterol 160/9 μg (20.7%) and placebo (19.5%) groups versus formoterol (16.9%). COPD severity[1] was moderate in 17.8%, severe in 60.4% and very severe in 21.5% of patients. Common co-morbid conditions included hypertension (41.6%), lipid profile abnormalities (22.0%), cardiac disease (17.7%), diabetes mellitus (11.0%), osteoporosis (10.5%), cataracts (5.2%), atrial fibrillation/arrhythmia (4.3%) and congestive cardiac failure (2.9%).\nTable IPatient demographic and baseline clinical characteristics of randomized patients\n\nBODY.RESULTS.EFFICACY EVALUATIONS.PULMONARY FUNCTION.CO-PRIMARY ASSESSMENTS:\nImprovements in pre-dose FEV1 were significantly greater for budesonide/formoterol 320/9 μg compared with formoterol (primary comparison; p = 0.008) and for both budesonide/formoterol dosages compared with placebo (p < 0.001) [figure 3a]. Although improvements in 1-hour post-dose FEV1 were significantly greater for both budesonide/formoterol dosages compared with placebo (primary comparison; p < 0.001), budesonide/formoterol 320/9 μg also resulted in significantly greater improvements compared with formoterol (p = 0.023) [figure 3b]. Improvements from baseline were apparent at the first assessment (pre-dose FEV1 at end of month 1; 1-hour post-dose FEV1 at day of randomization) and overall maintained over the 12-month treatment period for both budesonide/formoterol dosages (figure 3a and b).\nFig. 3Co-primary efficacy endpoints. Least squares mean change from baseline by study visit over the randomized treatment period in (a) pre-dose forced expiratory volume in 1 second (FEV1) and (b) 1-hour post-dose FEV1. BUD = budesonide; DPI = dry powder inhaler; FM = formoterol; pMDI = pressurized metered-dose inhaler. * p < 0.001 vs placebo; † p ≤ 0.023 vs FM.\n\nBODY.RESULTS.EFFICACY EVALUATIONS.PULMONARY FUNCTION.SECONDARY ASSESSMENTS:\nMorning and evening PEF improved significantly for all active treatments compared with placebo (p ≤ 0.012) and for both budesonide/formoterol dosages compared with formoterol (p ≤ 0.017) [table II]. Results for pre-dose and 1-hour post-dose FVC are presented in the online supplement (see figures S1a and b in the supplementary material ['ArticlePlus'] at http://links.adisonline.com/DGZ/A5).\nTable IIMean (SD) changes from baseline in additional pulmonary function assessments In the serial spirometry subset, a ≥15% improvement in FEV1 was observed at 5 minutes after dose administration (first assessment) with all active treatments on the day of randomization (budesonide/formoterol 320/9 μg, 17.7%; budesonide/formoterol 160/9 μg, 20.3%; formoterol 9 μg, 16.5%) and at end of treatment (budesonide/formoterol 320/9 μg, 21.8%; budesonide/formoterol 160/9 μg, 22.4%; formoterol 9 μg, 15.0%) [figures 4a, b and c]. There was no evidence of a diminished effect at end of treatment over the 12-hour period in the budesonide/formoterol groups. 
Mean FEV1 at 12 hours and baseline-adjusted average 12-hour FEV1 were significantly improved with both budesonide/formoterol dosages compared with placebo on the day of randomization and at end of treatment (p ≤ 0.002) [table II]. Although both budesonide/formoterol dosages demonstrated significant improvements in mean FEV1 at 12 hours and in baseline-adjusted average 12-hour FEV1 compared with formoterol on the day of randomization (p ≤ 0.029), only budesonide/formoterol 320/9 μg demonstrated this effect at the end of treatment (p ≤ 0.004) [table II].\nFig. 4Mean percentage change from baseline in forced expiratory volume in 1 second (FEV1) over 12 hours at randomization and end of treatment (EOT) for (a) budesonide (BUD)/formoterol (FM) 320/9 μg twice daily (bid) vs placebo, (b) BUD/FM 160/9 μg bid vs placebo and (c) FM 9 μg bid vs placebo. DOR = day of randomization; DPI = dry powder inhaler; pMDI = pressurized metered-dose inhaler. No significant differences in pre-dose IC were observed among the treatment groups (figure 5a). Improvements from baseline to the average over the randomized treatment period in 1-hour post-dose IC were significantly greater with both budesonide/formoterol dosages compared with placebo (p < 0.001) and formoterol (p ≤ 0.018) [figure 5b]. Both dosages of budesonide/formoterol resulted in a 350 mL (≈25%) improvement from baseline to the average over the randomized treatment period in 1-hour post-dose IC, and formoterol resulted in a 210 mL (17%) improvement in 1-hour post-dose IC.\nFig. 5Least squares mean change from baseline by study visit over the randomized treatment period in (a) pre-dose inspiratory capacity (IC) and (b) 1-hour post-dose IC. BUD = budesonide; DPI = dry powder inhaler; FM = formoterol; pMDI = pressurized metered-dose inhaler. * p < 0.001 vs placebo; † p = 0.01 vs FM; ‡ p < 0.05 vs FM; § p < 0.01 vs placebo.\n\nBODY.RESULTS.EFFICACY EVALUATIONS.CHRONIC OBSTRUCTIVE PULMONARY DISEASE (COPD) EXACERBATIONS:\nTime to first COPD exacerbation was significantly prolonged with both budesonide/formoterol dosages compared with placebo (p ≤ 0.004) and with budesonide/formoterol 320/9 μg compared with formoterol (p = 0.026) [figure 6]. In addition, significant reductions in the overall number of exacerbations per patient-treatment year were observed with budesonide/formoterol 320/9 μg and 160/9 μg versus placebo (37% and 41%, respectively; p < 0.001) and formoterol (25% and 29%, respectively; p ≤ 0.004) [see figure S2 in the supplementary material]. These reductions were driven by exacerbations treated with oral corticosteroids, the rate of which was reduced with budesonide/formoterol 320/9 μg and 160/9 μg versus placebo (37.5% and 42.9%, respectively) and formoterol (24.1% and 30.6%, respectively) [p ≤ 0.006]. In addition, the percentage of patients in the budesonide/formoterol 320/9 μg, budesonide/formoterol 160/9 μg, formoterol and placebo groups who experienced an exacerbation was greater in those receiving ICS therapy either alone or in combination before the study (33.7%, 38.2%, 39.9% and 40.1%, respectively) compared with those not previously receiving ICS (25.8%, 24.9%, 29.4% and 32.8%, respectively).\nFig. 6Kaplan-Meier probability curve for the time to first chronic obstructive pulmonary disease exacerbation during randomized treatment. BUD = budesonide; DPI = dry powder inhaler; FM = formoterol; pMDI = pressurized metered-dose inhaler. 
* p ≤ 0.004 vs placebo; † p = 0.026 vs FM.\n\nBODY.RESULTS.EFFICACY EVALUATIONS.HEALTH STATUS AND COPD SYMPTOMS:\nImprovements in SGRQ total score were significantly greater for both budesonide/formoterol dosages compared with placebo (p ≤ 0.006) and for budesonide/formoterol 160/9 μg compared with formoterol (p = 0.006; table III). The improvements from baseline in SGRQ total score in the active treatment groups were not clinically meaningful versus placebo (i.e. decrease of ≥4 points). Significantly greater improvements in COPD symptom variables were observed for both budesonide/formoterol dosages versus placebo for all variables (BCSS, dyspnoea score, cough score, sputum score, sleep score, percentage of awakening-free nights and total daily rescue medication use) [p ≤ 0.003]. Compared with formoterol, mean improvements were significantly greater with budesonide/formoterol 320/9 μg (p ≤ 0.038) for all COPD symptom variables except sputum score and percentage of awakening-free nights, and with budesonide/formoterol 160/9 μg (p ≤ 0.047) for all COPD symptom variables except BCSS, dyspnoea and sputum score (table IV).\nTable IIIMean changes (SD) in SGRQ total and domain scores from baselinea to end of treatmentbTable IVMean (SD) changes in chronic obstructive pulmonary disease (COPD) symptom variables from baselinea to the average over the randomized treatment period Improvements (i.e. reductions) in dyspnoea scores were significantly greater for all active treatments versus placebo (p ≤ 0.003) and for budesonide/formoterol 320/9 μg versus formoterol (p = 0.032; table IV). All active treatment arms demonstrated ≥0.2 points (minimal important difference) change from baseline, but only budesonide/formoterol 320/9 μg had an increase of ≥0.2 points over placebo. A significantly (p < 0.001) greater percentage of patients in both budesonide/formoterol groups and in the formoterol group experienced clinically meaningful improvements in dyspnoea versus placebo (see figure S3 in the supplementary material).\n\nBODY.RESULTS.SAFETY EVALUATIONS:\nMean treatment exposure was lowest for placebo (270 days) compared with budesonide/formoterol 320/9 μg (305 days), budesonide/formoterol 160/9 μg (299 days) and formoterol (289 days). The most commonly reported AE (irrespective of causality) was COPD, which had a lower incidence with budesonide/formoterol 320/9 μg versus placebo and a slightly higher incidence with budesonide/formoterol 160/9 μg versus placebo (table V). AEs considered by the investigator to be related to study medication were generally similar among treatment groups with the most commonly reported being oral candidiasis, COPD and dysphonia (see table SI in the supplementary material).\nTable VOverall adverse events (AEs) [irrespective of relationship to study medication] reported by ≥3% of patients The total incidence of pneumonia-related AEs (pneumonia, bronchopneumonia, lobar pneumonia and pneumonia staphylococcal) was similar for budesonide/formoterol 320/9 μg and 160/9 μg (4.0% and 3.4%, respectively) compared with formoterol (3.4%) and placebo (5.0%) [see table SII in the supplementary material]. For potential lung infections other than pneumonia, the incidence was slightly higher in the active treatment groups versus placebo, which was driven largely by bronchitis (see table SII in the supplementary material). 
AEs typically or potentially associated with local and systemic effects of inhaled corticosteroids (local effects: aphonia, dysphonia, oral candidiasis and candidiasis; systemic effects: weight gain, adrenal suppression, ocular effects, skin effects, psychiatric disorder, diabetes control, thirst, taste effects and bone effects) were more frequent with budesonide/formoterol 320/9 μg (10.3% and 4.0%, respectively) and budesonide/formoterol 160/9 μg (5.7% and 4.5%, respectively) than formoterol (0.6% and 2.6%, respectively) and placebo (2.5% and 2.7%, respectively). The overall incidence of LABA class effects (i.e. tremor, palpitation, tachycardia, potassium changes, glucose changes, headache, agitation, anxiety, sleep effects and muscle cramp) was low, but higher among active treatment groups (budesonide/formoterol 320/9 μg [9.5%], budesonide/formoterol 160/9 μg [8.9%] and formoterol [6.5%]) versus placebo (4.8%). The incidence of cardiac-related AEs was higher for all active treatments (10.5–11.3%) versus placebo (6.9%). The most common cardiac-related AEs were hypertension and angina pectoris (budesonide/formoterol 320/9 μg [2.4% and 1.2%, respectively], budesonide/formoterol 160/9 μg [1.6% and 0.8%, respectively], formoterol [2.6% and 1.8%, respectively] and placebo [2.1% and 1.0%, respectively]). Although the number of atrial fibrillation AEs was low, all cases occurred within the active treatment groups. Discontinuations due to AEs (DAEs) occurred in 238 patients, with a similar incidence observed across treatment groups (range 11.3–12.5%) [see table SIII in the supplementary material]. The most common DAE was COPD, which was highest in the formoterol group (7.3%) and lowest in the budesonide/formoterol 320/9 μg group (4.0%); the incidence was similar in the budesonide/formoterol 160/9 μg (6.1%) and placebo (6.0%) groups (see table SIII in the supplementary material). The overall incidence of non-fatal serious adverse events (SAEs) was highest with formoterol (17.8%) compared with the other treatment groups and higher with budesonide/formoterol 320/9 μg (15.6%) and budesonide/formoterol 160/9 μg (13.6%) versus placebo (12.1%) [see table SIV in the supplementary material]. The most common non-fatal SAEs (occurring in five or more patients [all treatment groups combined]) were COPD (6.8%), pneumonia (1.5%), atrial fibrillation (0.5%), angina pectoris (0.3%), bronchitis (0.3%) and coronary artery disease (0.3%). These non-fatal SAEs occurred at a similar incidence in all treatment groups, except for COPD, which was slightly more common in the active treatment groups (budesonide/formoterol 320/9 μg [7.1%], budesonide/formoterol 160/9 μg [6.7%] and formoterol [7.9%]) versus placebo (5.6%). Fifteen patients died during the randomized treatment period with no imbalance across treatments: three in the budesonide/formoterol 320/9 μg group; six in the budesonide/formoterol 160/9 μg group; two in the formoterol group; and four in the placebo group. Fifteen patients died after the randomized treatment period (≥2 days after stopping study treatment) with no imbalance across treatments: five in the budesonide/formoterol 320/9 μg group; two in the budesonide/formoterol 160/9 μg group; four in the formoterol group; and four in the placebo group. None of the deaths were considered drug related by the investigator. Clinically significant changes in vital signs, 24-hour urinary cortisol, and 12-lead ECGs, QT interval and Holter assessments were rare. 
The geometric mean values of 24-hour urinary cortisol at 6 months and end of treatment were lower in both budesonide/formoterol dosage groups compared with formoterol and placebo (see table SV in the supplementary material). Significant differences in 24-hour urinary cortisol were observed between the budesonide/formoterol 320/9 μg and placebo groups (p = 0.035) at month 6 and between the budesonide/formoterol 320/9 μg and formoterol groups (p = 0.044) at end of treatment. Corrected QT (QTc) intervals ≥450 msec were experienced by similar percentages of patients across all treatment groups, whereas few patients experienced QTc intervals ≥500 msec or a QTc change ≥60 msec (see table SVI in the supplementary material). Data from ECG and Holter recordings showed a similar incidence of new onset atrial fibrillation in the active and placebo groups. BMD was stable over the 1-year study period in all treatment groups, with small but statistically significant differences in changes from baseline observed for budesonide/formoterol 320/9 μg compared with all other treatments for total lumbar spine BMD (p ≤ 0.037) and for budesonide/formoterol 320/9 μg versus formoterol for total hip BMD (p = 0.012) [see table SVII in the supplementary material]. The differences between the treatment groups in changes from baseline in BMD were of unclear clinical relevance, as the geometric least squares mean ratios for these treatment group comparisons were close to 1 (range 0.98–0.99). Minor increases in lenticular opacities and intraocular pressure were noted across all treatment groups in the ophthalmological analysis subset; the magnitude of these changes was similar among the treatment groups (see table SVIII in the supplementary material). Small but significant differences were observed for the change from baseline in posterior subcapsular score between the budesonide/formoterol groups (p = 0.022). Clinically significant changes in ophthalmological assessment were infrequent.\n\nBODY.DISCUSSION:\nIn this study, both budesonide/formoterol dosage strengths resulted in significant improvements from baseline over the randomized treatment period in pulmonary function, health status and COPD symptoms, as measured by dyspnoea, cough and sputum production, and a clinically significant reduction in COPD exacerbation rate compared with placebo. In addition, both budesonide/formoterol pMDI dosages were well tolerated relative to formoterol alone and placebo. The results of this 12-month study confirm those from the earlier related 6-month study by Tashkin et al.[4] and are consistent with those from studies of budesonide/formoterol DPI.[2,3] Taken together, the results demonstrate that this ICS/LABA combination provides benefits beyond formoterol alone in the treatment of COPD. The contribution of budesonide to improvements from baseline in pulmonary function and COPD symptoms was demonstrated by the greater efficacy of budesonide/formoterol 320/9 μg compared with formoterol alone for the co-primary variables of pre-dose FEV1 and 1-hour post-dose FEV1, and for dyspnoea and total BCSS scores, supporting use of the higher-dose combination for COPD symptoms. Both budesonide/formoterol dosage strengths demonstrated greater improvements from baseline compared with formoterol alone in morning and evening PEF, 12-hour FEV1, symptom scores and rescue medication use. 
In addition, the 25–29% reduction in exacerbation rate observed with both budesonide/formoterol dosage strengths beyond the substantial reduction achieved with formoterol alone further demonstrates the important contribution of budesonide to the combination product. The magnitude of the reduction in exacerbation rates reported in this study is similar to that reported previously for fixed-dose regimens of budesonide/formoterol 320/9 μg twice daily[2–4] and fluticasone propionate/salmeterol[11–13] relative to LABA alone. A key difference between this 12-month study and the 6-month study of similar design reported by Tashkin et al.[4] is that this study was powered to show a difference in exacerbations, while the 6-month study was not. This study demonstrated a significant reduction in exacerbation rates in patients treated with budesonide/formoterol 320/9 μg and 160/9 μg compared with those receiving formoterol or placebo. The fact that a greater percentage of patients in the formoterol and placebo groups who previously received ICS therapy, either alone or in combination, experienced exacerbations compared with those who did not previously receive ICS therapy may result in a skewing of the data; however, a similar response was observed in combination budesonide/formoterol patients with a greater percentage of patients who previously received ICS experiencing exacerbations compared with patients who were not previously treated with ICS therapy. Similarly, a greater percentage of patients who previously received ICS therapy discontinued in the formoterol and placebo groups than those who did not previously receive ICS therapy; however, this trend was reversed in the combination budesonide/formoterol groups. The long-term nature of this study allowed for a more thorough assessment of tolerability compared with the 6-month study.[4] No differences in pneumonia incidence were observed among treatment groups in this study or in the 6-month study, despite a shorter exposure time in the placebo group.[4] These results contrast with previous COPD studies that showed an increased incidence of pneumonia with fluticasone propionate/salmeterol treatment.[11–14] However, in this study and the 6-month study,[4] the incidence of potential lung infections other than pneumonia was slightly higher in all active treatments compared with placebo; these differences were largely driven by an increased incidence of bronchitis. The incidence of SAEs, including COPD SAEs, was slightly higher in both budesonide/formoterol groups compared with placebo. This finding may be attributed to the earlier discontinuation in the placebo group, which resulted in a shorter study drug exposure (35 and 29 fewer days vs budesonide/formoterol 320/9 μg and 160/9 μg, respectively). Although the overall incidence of atrial fibrillation, a known LABA class effect, was small (1.1%), an imbalance was noted between the formoterol-containing arms and placebo. However, no differences in new-onset atrial fibrillation were apparent among groups based on 24-hour Holter monitoring or ECG data collected at repeated visits during the randomized treatment period. 
Overall, the safety profile of budesonide/formoterol pMDI in this 12-month study was similar to that reported in the 6-month study by Tashkin et al.,[4] suggesting no increase in safety concerns with a longer duration of treatment.\n\nBODY.CONCLUSIONS:\nBoth dosage strengths of budesonide/formoterol pMDI (320/9 μg and 160/9 μg) were effective and well tolerated over 1 year of treatment in this population of patients with moderate to very severe COPD. In addition, budesonide/formoterol pMDI 320/9 μg demonstrated greater efficacy compared with formoterol for pre-dose FEV1, suggesting that the higher dosage strength containing twice the daily dose of budesonide is appropriate in this patient population.\n\nBODY.ELECTRONIC SUPPLEMENTARY MATERIAL:\nSupplementary material, approximately 247 KB.\n\n**Question:** Compared to Formoterol DPI 9 μg what was the result of Budesonide/formoterol pMDI 320/9 μg on Cough score improvement?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly increased\n"
}
] |
significantly increased
|
479
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Improved outcomes from the administration of progesterone for patients with acute severe traumatic brain injury: a randomized controlled trial\n\n ABSTRACT.BACKGROUND:\nSevere traumatic brain injury (TBI) has been increasing with greater incidence of injuries from traffic or sporting accidents. Although there are a number of animal models of TBI using progesterone for head injury, the effects of progesterone on neurologic outcome of acute TBI patients remain unclear. The aim of the present clinical study was to assess the longer-term efficacy of progesterone on the improvement in neurologic outcome of patients with acute severe TBI.\n\nABSTRACT.METHODS:\nA total of 159 patients who arrived within 8 hours of injury with a Glasgow Coma Score ≤ 8 were enrolled in the study. A prospective, randomized, placebo-controlled trial of progesterone was conducted in the Neurotrauma Center of our teaching hospital. The patients were randomized to receive either progesterone or placebo. The primary endpoint was the Glasgow Outcome Scale score 3 months after brain injury. Secondary efficacy endpoints included the modified Functional Independence Measure score and mortality. In a follow-up protocol at 6 months, the Glasgow Outcome Scale and the modified Functional Independence Measure scores were again determined.\n\nABSTRACT.RESULTS:\nOf the 159 patients randomized, 82 received progesterone and 77 received placebo. The demographic characteristics, the mechanism of injury, and the time of treatment were compared for the two groups. After 3 months and 6 months of treatment, the dichotomized Glasgow Outcome Scale score analysis exhibited more favorable outcomes among the patients who were given progesterone compared with the control individuals (P = 0.034 and P = 0.048, respectively). The modified Functional Independence Measure scores in the progesterone group were higher than those in the placebo group at both 3-month and 6-month follow-up (P < 0.05 and P < 0.01). The mortality rate of the progesterone group was significantly lower than that of the placebo group at 6-month follow-up (P < 0.05). The mean intracranial pressure values 72 hours and 7 days after injury were lower in the progesterone group than in the placebo group, but there was no statistical significance between the two groups (P > 0.05). Instances of complications and adverse events associated with the administration of progesterone were not found.\n\nABSTRACT.CONCLUSION:\nOur data suggest that acute severe TBI patients with administration of progesterone hold improved neurologic outcomes for up to 6 months. 
These results provide information important for further large and multicenter clinical trials on progesterone as a promising neuroprotective drug.\n\nABSTRACT.TRIAL REGISTRATION:\nACTRN12607000545460.\n\nBODY.INTRODUCTION:\nTraumatic brain injury (TBI) remains one of the leading causes of injury-related death and severe disability. The management of TBI currently includes preventing further neurological insults, managing the intracranial pressure (ICP), and surgical procedures. It is very important to search for clinically effective neuroprotective drugs to prevent secondary brain injury after TBI. In spite of many neuroprotective agents showing efficacy in experimental models of TBI, none has produced significant neuronal protection when tested in clinical trials [1,2]. Progesterone, a hormone, has steroidal, neuroactive and neurosteroidal action in the center neuronal system. Neuroprotective effects of progesterone have recently been shown in a variety of animal models, including ischemic and traumatic brain insult models [3-6]. Postinjury administration of progesterone in experimental models of head injury confers significant protection against TBI-induced cerebral edema and secondary neuronal death, promoting behavioral recovery [7,8]. Experimental evidence suggests that postinjury treatment with progesterone decreases brain edema, attenuates free radical damage, and reduces neuronal loss in TBI animal models [8-13]. Progesterone also reduces the inflammatory response and attenuates neurological abnormalities after ischemia and spinal cord injury [14-18]. In a recently published controlled study of progesterone, Wright and colleagues conducted a phase II, randomized, double-blind, placebo-controlled trial to assess the safety and benefit of administering progesterone to patients with acute TBI [19]. No serious adverse events were found in the 77 patients who received progesterone, and the patients with moderate TBI who received progesterone were more likely to have a moderate to good outcome than those were randomized to placebo at 30 days post injury. The 30-day mortality in the progesterone group was less than one-half that of the control group. This outcome suggests that progesterone causes no harms and may be a beneficial treatment for TBI [19]. Despite these potential advantages and the good safety profile of progesterone described in studies utilizing animals or humans as subjects, there is relatively little information available from assessing neuroprotective properties of progesterone in the patients with acute severe brain trauma. The effects of progesterone on neurological outcome of the TBI patients remain unclear. The purpose of the present pilot clinical study was to assess the longer-term efficacy of progesterone on improving the neurological outcome of patients with acute severe TBI.\n\nBODY.MATERIALS AND METHODS.PATIENTS:\nPatients with acute severe TBI and a Glasgow Coma Scale (GCS) score ≤ 8 after resuscitation and stabilization were entered into the study. Two hundred and thirty patients from the Neurotrauma Center of our teaching hospital were included. Male or female patients between the ages of 18 and 65 years were studied. The patients received progesterone within 8 hours after the documented time of injury. All patients admitted to the Neurotrauma Center, Clinical Medical College of Hangzhou between March 2004 and February 2007 were consecutively eligible for enrollment. 
We excluded patients who had received any investigational drugs 30 days prior to the enrollment, such as progesterone, estrogen and investigational compound, patients with severe anoxic intracerebral damage or brain death, and patients whose clinical condition was unstable (partial pressure of oxygen < 60 mmHg or a systolic blood pressure < 90 mmHg, or both). We also excluded pregnant patients and lactating female patients, and those for whom there was doubt whether the neurological status resulted from head trauma or acute or chronic spinal cord injury. The study was conducted in compliance with the clinical protocol approved by the Institutional Review Board and the ethical committees of Clinical Medical College of Hangzhou, according to Good Clinical Practice standards. Because of the nature of patients' injuries, subjects in this clinical study were incapable of granting informed consent. Investigators therefore obtained informed consent from the subject's legal guardian or health proxy before administering the drug. Given the urgent circumstances, we were unable to obtain permission from a legal guardian or health proxy within the stipulated time window for some patients (n = 53). Investigators therefore sought approval from the Institutional Review Board to use deferred consent. If the Institutional Review Board determined that these regulatory criteria were satisfied, the investigators were permitted to enroll subjects without consent. When the drug was administered without proxy consent, the Institutional Review Board was notified within 2 working days. We continued to try to contact the proxy consent after drug administration, and documented those attempts to the Institutional Review Board. Once contacted, the family or legally authorized representative was notified of the patient's enrollment and asked to provide written approval for the patient's continued participation. If attempts to contact proxy consent were unsuccessful, or if the patient died before the family could be contacted, we notified the Institutional Review Board and placed a full report in the patient's record and study file.\n\nBODY.MATERIALS AND METHODS.STANDARD CLINICAL MANAGEMENT:\nAfter head computerized tomography scanning, the patients were delivered to the neurosurgical intensive care unit of the teaching hospital immediately or following surgical evacuation of an intracranial hematoma. All patients received the standard treatment for management of severe TBI based on the guidelines for the management of severe head injury of the American Association of Neurologic Surgeons [20]. Particular emphasis was placed on the prevention and treatment of secondary insults, the avoidance of intracranial hypertension, maintenance of a normovolemic state as well as normothermia and normoglycemia, with ventilation to maintain the oxygen pressure at a minimum of 100 mmHg and the carbon dioxide pressure at approximately 35 mmHg.\n\nBODY.MATERIALS AND METHODS.RANDOMIZATION AND MEDICATION ADMINISTRATION:\nThe prospective, randomized, placebo-controlled, double-blind study was conducted in our neurosurgical intensive care unit. Subjects enrolled in the study were randomized to receive either progesterone (Tianjing Jinyao Pharmaceutical Co. Ltd, Tianjing, China) or matching placebo within 8 hours of the documented time of injury. Qualifying patients were randomly assigned in a 1:1 manner to receive the matching treatment with random numbers. 
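The inclusion and exclusion rules above amount to a small screening checklist. The function below is a compressed, illustrative restatement (age 18–65 years, post-resuscitation GCS ≤ 8, drug started within 8 hours of injury, none of the listed exclusions); it is an editorial sketch, not the trial's actual screening instrument.

```python
# Compressed, illustrative restatement of the headline enrolment window described above.
# Exclusions (recent investigational drugs, severe anoxic damage or brain death, unstable
# haemodynamics, pregnancy/lactation, uncertain cause of neurological status) are folded
# into a single flag here for brevity.
def is_eligible(age_years, gcs, hours_since_injury, has_exclusion=False):
    """Return True if a patient meets the main inclusion criteria of the trial."""
    return (18 <= age_years <= 65
            and gcs <= 8
            and hours_since_injury <= 8
            and not has_exclusion)

print(is_eligible(age_years=45, gcs=6, hours_since_injury=3))   # True
print(is_eligible(age_years=70, gcs=6, hours_since_injury=3))   # False (outside age range)
```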
Patients for the treatment group were given progesterone at 1.0 mg/kg via intramuscular injection and then once per 12 hours for 5 consecutive days. A single-dosage volume equivalent to 0.05 ml/kg was used in each subject. In a double-blind manner, progesterone and placebo were supplied via identical-looking solutions in identical glass vials with or without progesterone. The appearance, packaging and administration of placebo and progesterone injections were the same for the two groups. All patients, treating physicians, nursing staff, and pharmacists were blinded throughout the study period.\n\nBODY.MATERIALS AND METHODS.CLINICAL MEASUREMENTS:\nThe ICP was monitored continuously using ICP monitoring apparatus (CAMINO. MPM-1; Integra Co., San Diego, CA, USA). A computerized tomography scan was obtained in all patients at admission and this was categorized according to the modified Marshall computerized tomography scan classification: I, intracranial pathology not visible on the computerized tomography scan; II, cisterns present with shift ≤ 5 mm; lesions present, but no high-density or mixed-density lesions > 25 cm3, with bone fragments and foreign bodies; III, cisterns compressed or absent, shift ≤ 5 mm, with no high-density or mixed-density lesions > 25 cm3; IV, shift > 5 mm, with no high-density or mixed-density lesions >25 cm3; V, any surgically evacuated lesion; and VI, high-density or mixed-density lesions >25 cm3 without surgical evacuation. The patient's condition – body temperature, heart rate and respiratory rate, blood pressure, and pulse blood oxygen saturation – was monitored continuously at the bedside with monitoring apparatus (Hewlett-Packard, Palo Alto, CA, USA). Daily evaluations of neurologic status over the initial 14-day period were performed via the GCS score, adverse experiences, surgical procedures, and intracranial complications. Intake and output of fluids were also recorded. Laboratory tests including hematology, the coagulation profile and clinical chemistry were performed daily and then for 1 week after injury. A urine pregnancy test was performed at enrollment for female patients (as necessary).\n\nBODY.MATERIALS AND METHODS.NEUROLOGIC OUTCOME MEASUREMENTS:\nThe neurologic outcome was evaluated according to the Glasgow Outcome Scale (GOS) score, which contains five levels of outcome: good recovery, moderate disability, severe disability, vegetative survival, or death. For statistical analysis, GOS scores were dichotomized into favorable or unfavorable outcomes. Patients in the upper two GOS outcome groups (good recovery and moderate disability) were considered of favorable outcome, and patients in the other groups (severe disability, vegetative state, or death) were considered of unfavorable outcome. Secondary efficacy endpoints were the modified Functional Independence Measure (FIM) score and mortality. Based on previous reports [21,22], the modified FIM measurements of disability in three areas of activity (domains of self-care, motor function, and cognitive function) were chosen from the 18 items in the full FIM. Each of three items (expression, feeding, and locomotion) includes four possible levels of function ranging from total dependence (1) to independence (4). The total modified FIM scores therefore ranged from 3 to 12. 
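The dosing regimen above implies some simple arithmetic: 1.0 mg/kg per intramuscular injection in a volume of 0.05 ml/kg, which corresponds to a 20 mg/ml solution (a concentration the text does not state explicitly), repeated every 12 hours for 5 days. The sketch below works this out for an example body weight; the injection count of 10 (two per day for 5 days) is an illustrative reading of the schedule.

```python
# Dose arithmetic implied by the regimen described above. The 20 mg/ml concentration is
# inferred from 1.0 mg/kg given in 0.05 ml/kg, and the injection count (2 per day x 5 days)
# is an illustrative reading of the schedule; neither is stated as such in the text.
def progesterone_regimen(weight_kg, dose_mg_per_kg=1.0, volume_ml_per_kg=0.05,
                         injections_per_day=2, days=5):
    dose_mg = dose_mg_per_kg * weight_kg
    volume_ml = volume_ml_per_kg * weight_kg
    n_injections = injections_per_day * days
    total_mg = dose_mg * n_injections
    concentration_mg_per_ml = dose_mg_per_kg / volume_ml_per_kg
    return dose_mg, volume_ml, n_injections, total_mg, concentration_mg_per_ml

dose, vol, n, total, conc = progesterone_regimen(weight_kg=70)
print(f"{dose:.0f} mg in {vol:.1f} ml per injection ({conc:.0f} mg/ml), "
      f"{n} injections, {total:.0f} mg total")
```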
The patients were assessed using the same measures both at 3 and 6 months in the follow-up protocol.\n\nBODY.MATERIALS AND METHODS.STATISTICAL ANALYSIS:\nDescriptive statistics, including proportions, means and standard deviations, were compiled for all demographic and outcome measures. Demographic and clinical data were analyzed using Fisher's exact test. Statistical analyses were conducted to assess differences between the treatment group and the control group on specific variables, using contingency analysis (chi-squared) for categorical data and Student's t test for continuous data. P < 0.05 was considered statistically significant. The SPSS 11.0 software package (SPSS Inc., Chicago IL, USA) was used for statistical analysis.\n\nBODY.RESULTS.PATIENTS:\nBetween March 2004 and February 2007, a total of 230 patients were screened in the present study. Of these, 159 patients meeting the protocol criteria were recruited and randomized to receive either progesterone (n = 82) or placebo (n = 77). Data were available for 154 patients (96%) at the 3-month follow-up and for 135 patients (84%) at the 6-month follow-up. Nineteen patients (11%) were lost to follow-up, three patients (1%) refused follow-up, and two patients (1%) withdrew from the trial. No subjects were enrolled in violation of the protocol stipulations (Figure 1). Figure 1 Trial profile. The demographics of the progesterone and placebo groups are presented in Table 1. The cohorts were well balanced with no significant differences between the two groups. The medication history of patients, medication administration, and medical procedures were not significantly different among treatment groups. Table 1 Clinical and demographic characteristics between the two groups Admission characteristic Placebo (n = 77) Progesterone (n = 82) P value Males 57 (74) 58 (70) 0.64 Females 20 (25) 24 (29) 0.64 Mean (standard deviation) age (years) 31 (9) 30 (11) 0.52 Mean (standard deviation) time injury to administration (hours) 3.65 (1.46) 3.80 (2.03) 0.59 Mean (standard deviation) qualifying Glasgow Coma Scale score 6.1 (1.3) 6.0 (1.8) 0.68 Glasgow Coma Scale 3 to 5 20 (25) 22 (26) 0.90 Mechanism of injury Motor vehicle 62 (80) 63 (76) 0.57 Fall 8 (10) 10 (12) 0.71 Assault 4 (5) 7 (8) 0.40 Other 3 (3) 2 (2) 0.59 Surgical procedures 24 (31) 22 (26) 0.54 Pupillary response Bilaterally normal 26 (33) 26 (31) 0.78 Abnormal 51 (66) 56 (68) 0.78 Marshall computerized tomography scan classification I 0 0 II 9 (11) 7 (8) 0.50 III 22 (28) 28 (34) 0.44 IV 13 (16) 12 (14) 0.69 V 24 (31) 22 (26) 0.54 VI 9 (11) 13 (15) 0.44 Data presented as n (%) unless indicated otherwise. \n\nBODY.RESULTS.GLASGOW OUTCOME SCALE SCORES:\nThe 3-month and 6-month GOS scores for the progesterone and placebo groups are summarized in Table 2. There was a better recovery rate for the patients who were given progesterone than for those in the control group at 3-month follow-up (P = 0.044). A dichotomized analysis revealed significant differences in neurologic outcome between the treatment and control groups (Figure 2). The analysis using the dichotomization of GOS scores at 3 months post injury revealed a favorable outcome in 47% of the patients receiving progesterone and in 31% of the placebo group (P = 0.034). There was an unfavorable outcome in 53% of the patients receiving progesterone and in 70% of the placebo group (P = 0.022). 
At 6-month follow-up, the dichotomized GOS scores also showed a statistically significant difference between the two groups, similar to that at 3 months after injury. The percentage of favorable outcome was 58% for the patients who were given progesterone and 42% in the placebo group (P = 0.048). Forty-one percent of patients who were given progesterone and 57% of the placebo group exhibited an unfavorable outcome (P = 0.048). Table 2 Comparison of Glasgow Outcome Scale scores between the progesterone and placebo group patients at 3-month and 6-month follow-up Glasgow Outcome Scale scores Progesterone (n = 82) Placebo (n = 77) 3 months Good recovery 21 (25) 10 (12) Moderate disability 18 (21) 14 (18) Severe disability 16 (19) 13 (16) Vegetative survival 13 (15) 16 (20) Death 15 (18) 25 (32) 6 months Good recovery 26 (31) 19 (24) Moderate disability 22 (26) 14 (18) Severe disability 9 (10) 11 (14) Vegetative survival 10 (12) 8 (10) Death 15 (18) 25 (32) Data presented as n (%). Figure 2 Dichotomized Glasgow Outcome Scale scores for patients receiving either progesterone or placebo. There was a significantly more favorable outcome among patients who were given progesterone compared with patients receiving placebo (P = 0.034) 3 months postinjury. At 6-month follow-up, the significant difference in the dichotomization of Glasgow Outcome Scale scores between the progesterone and placebo groups was similar to that at 3 months after injury (P = 0.048). Subgroup analysis for women also showed a significant difference in the percentage of favorable outcome between the two groups at 6-month follow-up (35% in the placebo group and 66% in the progesterone group, P = 0.036). The patients who were given progesterone in the group with GCS of 6 to 8 showed a more favorable outcome (43%) compared with the placebo group (28%) at 6-month follow-up (P = 0.044). There was no significant difference, however, in dichotomized outcomes in the group with GCS of 3 to 5 (P > 0.05).\n\nBODY.RESULTS.MODIFIED FUNCTIONAL INDEPENDENCE MEASURE SCORES:\nFigure 3 shows the modified FIM scores at 3-month and 6-month follow-up. There was a significant difference in the mean modified FIM score between the two groups both at 3-month and 6-month follow-up. At the 3-month follow-up, the scores were 7.35 ± 1.89 for the placebo group and 8.02 ± 1.73 for the progesterone group (P < 0.05). Six months after injury, the placebo group showed a score of 8.95 ± 1.05 and the progesterone group a score of 9.87 ± 1.17 (P < 0.01), suggesting good functional outcome in the patients treated with progesterone. Figure 3 Modified Functional Independence Measure scores for patients receiving either progesterone or placebo. Modified Functional Independence Measure (FIM) scores at 3-month and 6-month follow-up from patients receiving either progesterone or placebo show that the scores in the progesterone group were significantly higher than those in the placebo group at both 3-month and 6-month follow-up. Data expressed as the mean ± standard deviation. Different from the placebo group: *P < 0.05, **P < 0.01.\n\nBODY.RESULTS.MORTALITY:\nDuring the 6 months of follow-up, a total of 40 patients (25%) died in the present study (37 patients died during their hospital stay). Seventy percent of deaths occurred within 1 week after trauma. Mortality was attributed to the severe head injury in each case. 
The mortality rate in the progesterone treatment group was significantly lower at 6-month follow-up compared with the placebo group (18% versus 32%, P = 0.039).\n\nBODY.RESULTS.INTRACRANIAL PRESSURE:\nFigure 4 shows the ICP in the progesterone group patients and in the placebo group patients at 24 hours, 72 hours and 7 days after injury. The ICP was monitored continuously for 75 patients (47%), 40 in the progesterone group and 35 in the placebo group. The mean ICP showed no apparent difference at 24 hours after trauma between the two groups (progesterone group, 22.1 ± 4.3 mmHg versus placebo group, 23.2 ± 4.6 mmHg; P = 0.121). At 72 hours and 7 days after injury, the mean ICP of patients who were given progesterone was slightly lower than that of patients who received placebo, but the differences were not statistically significant (16.9 ± 3.8 mmHg and 14.8 ± 3.8 mmHg for progesterone-treated patients versus 18.2 ± 5.1 mmHg and 15.9 ± 4.1 mmHg for placebo-treated patients, respectively; P > 0.05). Figure 4 Comparison of intracranial pressure between patients receiving either progesterone or placebo. The mean intracranial pressure showed no significant differences between the progesterone and placebo groups at 24 hours, 72 hours and 7 days after injury (P > 0.05). Data expressed as the mean ± standard deviation.\n\nBODY.RESULTS.GLASGOW COMA SCALE SCORES AND CLINICAL MEASUREMENTS:\nThe mean GCS scores increased progressively in the two groups during the 14-day acute phase of the study, with no apparent differences between the treatment groups. Meanwhile, there was no obvious difference in average body temperature, heart and respiratory rates, blood pressure, pulse blood oxygen saturation, or laboratory testing between the progesterone and placebo groups.\n\nBODY.RESULTS.COMPLICATIONS AND ADVERSE EVENTS:\nProgesterone was well tolerated in the treated patients with acute severe TBI. No complications or adverse events associated with the administration of progesterone were found in this clinical study during the hospitalization period.\n\nBODY.DISCUSSION:\nThe results of the present trial showed for the first time that progesterone administration had longer-term efficacy on clinical outcomes in acute TBI patients. A significant increase in the proportion of patients with a favorable outcome in the progesterone group compared with the placebo group up to 6 months indicates the potential of progesterone for the treatment of acute TBI. Moreover, there were more surviving TBI patients in the treatment group than in the control group. Our results suggest the efficacy of progesterone in the treatment of acute severe TBI. Previous reports showed evidence of efficacy in TBI animal models [8-14]. In the present study, the efficacy and safety of progesterone were confirmed in patients with acute severe TBI. Furthermore, our results using the modified FIM and GOS scores showed that progesterone administration markedly enhanced functional recovery 6 months after injury and reduced the mortality of the patients with acute severe TBI (GCS = 6 to 8), although there was no statistically significant outcome improvement for GCS = 3 to 5 patients with progesterone treatment compared with those without. The evidence of improved outcomes among female patients also suggests, in part, the beneficial efficacy and feasibility of progesterone in women with TBI, despite the limited number of female patients in the trial. 
It is recognized that the pathophysiology of TBI is a multifactorial process involving a complex and interwoven series of pathologic processes following the onset of insult, such as increased extracellular glutamate concentrations, increased intracellular Ca2+, free radical overproduction and an exacerbated inflammatory response. Medication targeted at a single pathological injury factor therefore cannot sufficiently restore functional deficits following TBI. The ideal drugs should be able to block multiple cellular events leading to brain damage following TBI. Neuroprotective strategies currently focus on acting on only one of the mechanisms. Some efforts have been made, however, to combine agents or interventions to increase the probability of success in this setting [23,24]. Nevertheless, a single pharmacologic agent or procedure that can slow or block the damaging chemicals released after brain injury remains highly desirable. Progesterone has several features that make it an attractive potential drug candidate for TBI. First, progesterone could protect against brain damage via multiple mechanisms [13,15-18]. The pharmacokinetics of progesterone and its pattern of adverse reactions are well known since the drug has been safely used for a long time [25,26]. Second, progesterone has a wide therapeutic window: a single bolus given up to 24 hours post injury may significantly reduce cerebral edema [7]. Third, progesterone may rapidly cross the blood–brain barrier and reach equilibrium with the plasma within 1 hour of administration [27-29]. Administration of progesterone soon after TBI would probably benefit the recovery of the patient. In the present double-blind trial, progesterone or placebo was dissolved in the same camellia oil and given daily for 5 days to patients with acute TBI. Those patients administered progesterone experienced significant improvements in functional outcome, indicating neuroprotective properties of progesterone in acute severe TBI. There was no adverse event after administration of progesterone and no further late toxicity up to 6 months in the trial. Goss and colleagues suggested that low and moderate doses of progesterone (4 to 8 mg/kg) were optimal for facilitating recovery of select behaviors, and that postinjury progesterone treatment permitted a wider dose range than preinjury treatment in rats with medial frontal cortical contusions [30]. In addition, 5 days of postinjury progesterone treatment are needed to significantly reduce the neuropathological and behavioral abnormalities found in a rodent model of TBI [13]. Wright and colleagues used intravenous progesterone at a dose of 0.71 mg/kg, followed by 0.5 mg/kg progesterone every 12 hours during the 3 following days, which appeared safe in the treatment of TBI patients [19]. In our study, the patients received an initial intramuscular injection of 1.0 mg/kg progesterone, followed by the same dose every 12 hours for 5 consecutive days. The results of our trial showed that this higher-dose progesterone regimen, used as protective therapy, did not lead to any serious side effects. No obvious symptoms of hormonal reaction were observed in our study. Accordingly, it can be anticipated that progesterone may be a promising treatment for severe TBI patients as it is inexpensive, widely available and has a long track record of safe use in humans to treat other diseases. 
The data in the present study provide encouraging support for the use of GOS and modified FIM scores to assess outcomes in TBI patients in a clinical trial. The GOS score, although useful, provides only a global assessment of function and dependence; it may not differentiate specific differences in cognitive function, motor function, or daily activities. The modified FIM score selects only three items from the 18-item score, and also distinguishes only four (as opposed to seven) levels of function. Subtle or complex deficiencies, particularly in cognitive function, may therefore not have been identified in the dataset. A deficiency in using any one scale to measure outcome is that it is limited in its scope of measurement. The present clinical study was therefore designed to evaluate functional outcome according to both the GOS and the modified FIM score. Intracranial hypertension has been considered an important factor affecting the outcome of patients with acute severe TBI. Progesterone administration has been shown to decrease cerebral edema [9]. In an experimental study with male rats, there was a linear correlation between the serum progesterone level and brain edema after experimental TBI: the higher the serum progesterone level, the lower the cerebral edema [31]. In the current trial, however, no statistically significant difference was found in ICP monitoring between the groups given progesterone or placebo. It seems that progesterone treatment has little effect on directly reducing the ICP of patients with acute severe TBI. As a result of randomization, all of these parameters were homogeneous between the progesterone and placebo groups in our clinical trial. Nevertheless, the current study has some limitations. The results could have been influenced by the single-center design and the local perioperative standard of care. Adequately powered studies are therefore necessary to assess progesterone's effects on neurologic outcomes. Our finding of significant differences in outcomes between the two groups of patients emphasizes the potential value of using the GOS and FIM to tailor progesterone administration and the likelihood of observing similar differences in a larger patient population; however, the possibility exists that a statistical error may have occurred because of an inadequate sample size. Further studies are needed to determine the mechanisms of action underlying the neurologic effect observed.\n\nBODY.CONCLUSION:\nThe present pilot study indicated that the use of progesterone may significantly improve the neurologic outcome of patients suffering severe TBI up to 6 months after injury, providing a potential benefit to the treatment of acute severe TBI patients. Our results strongly support further large, multicenter clinical trials to examine the ways in which progesterone achieves this profound neurologic effect and to decipher the optimal conditions in which it can be used to lengthen the duration of and improve the degree of neuroprotection.\n\nBODY.KEY MESSAGES:\n• Many neuroprotective agents have been shown to be effective against TBI in animal models, but no single agent has yet shown improvement in outcome for head injury patients. • A number of experimental models have suggested that administration of progesterone has a potential benefit for head injury. 
• The present clinical trial reveals that progesterone may be used as a potential safe drug for the treatment of acute severe head trauma patients.\n\nBODY.ABBREVIATIONS:\nFIM = Functional Independence Measure; GCS = Glasgow Coma Scale; GOS = Glasgow Outcome Scale; ICP = intracranial pressure; TBI = traumatic brain injury.\n\nBODY.COMPETING INTERESTS:\nThe authors declare that they have no competing interests.\n\nBODY.AUTHORS' CONTRIBUTIONS:\nGMX and WQY participated in the trial design and were involved in the study analysis and summary. GMX and WMW obtained the data. GMX, JW, ZHL and WMW participated in the data analysis and interpretation of the results. All authors reviewed the final version.\n\n**Question:** Compared to placebo what was the result of progesterone on The analysis using the dichotomization of GOS scores at 3 months post injury?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly increased\n"
}
] |
significantly increased
|
261
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Impact and Process Evaluation of Integrated Community and Clinic-Based HIV-1 Control: A Cluster-Randomised Trial in Eastern Zimbabwe\n\n ABSTRACT.BACKGROUND:\nHIV-1 control in sub-Saharan Africa requires cost-effective and sustainable programmes that promote behaviour change and reduce cofactor sexually transmitted infections (STIs) at the population and individual levels.\n\nABSTRACT.METHODS AND FINDINGS:\nWe measured the feasibility of community-based peer education, free condom distribution, income-generating projects, and clinic-based STI treatment and counselling services and evaluated their impact on the incidence of HIV-1 measured over a 3-y period in a cluster-randomised controlled trial in eastern Zimbabwe. Analysis of primary outcomes was on an intention-to-treat basis. The income-generating projects proved impossible to implement in the prevailing economic climate. Despite greater programme activity and knowledge in the intervention communities, the incidence rate ratio of HIV-1 was 1.27 (95% confidence interval [CI] 0.92–1.75) compared to the control communities. No evidence was found for reduced incidence of self-reported STI symptoms or high-risk sexual behaviour in the intervention communities. Males who attended programme meetings had lower HIV-1 incidence (incidence rate ratio 0.48, 95% CI 0.24–0.98), and fewer men who attended programme meetings reported unprotected sex with casual partners (odds ratio 0.45, 95% CI 0.28–0.75). More male STI patients in the intervention communities reported cessation of symptoms (odds ratio 2.49, 95% CI 1.21–5.12).\n\nABSTRACT.CONCLUSIONS:\nIntegrated peer education, condom distribution, and syndromic STI management did not reduce population-level HIV-1 incidence in a declining epidemic, despite reducing HIV-1 incidence in the immediate male target group. Our results highlight the need to assess the community-level impact of interventions that are effective amongst targeted population sub-groups.\n\nBODY.INTRODUCTION:\nHIV-1–prevalence declines may now be occurring in some sub-Saharan African countries [1]. However, there remains little direct evidence that prevention measures—rather than natural HIV-1 epidemic dynamics [2] or behaviour change prompted by mortality [3]—have contributed to the slowing of HIV-1 epidemics [4,5]. Syndromic management of sexually transmitted infections (STIs) proved effective early in an HIV-1 epidemic in north-west Tanzania [6]. Peer education to promote safe behaviours showed promise in early process evaluations [7], but a randomised controlled trial (RCT) of factory workers in Harare, Zimbabwe, done in the mid-1990s, proved inconclusive [8]. Subsequent RCTs of syndromic management [9] and mass treatment of STIs [10], together with an information, education, and communication (IEC) behaviour-change programme [9], showed no effect in more mature epidemics. 
Integrated implementation of synergistic community-based HIV-1 control strategies could be a more cost-effective and sustainable approach to HIV-1 prevention than parallel application of vertical (top-down) programmes [11]. One scientific evaluation of such a strategy has been reported in which a combination of IEC activities amongst the general population and syndromic STI management showed no impact on HIV-1 incidence at the population level [9], although participation in the IEC activities was associated with reduced HIV-1 infection in women [12]. We conducted a cluster-RCT to test the hypothesis that integrated implementation of combined community- and clinic-based HIV-1 prevention, in which IEC activities focus primarily on high-risk populations, can be feasible and effective in reducing HIV-1 incidence in a major maturing epidemic in eastern Zimbabwe (Protocols S1 and S2; Text S1 and S2).\n\nBODY.METHODS.PARTICIPANTS AND RANDOMISATION PROCEDURE:\nThe study communities comprised six pairs of communities matched by socio-economic type—small town, tea/coffee estate, forestry plantation, roadside trading settlement, and subsistence farming area (two pairs) (Figure 1). Each community included at least one Government or Mission health centre. It was anticipated that HIV-1 incidence would be similar within each pair of communities. Within each pair, one community was assigned at random (un-blinded coin toss by a Ministry of Health official witnessed by programme and research personnel) to receive the additional intervention and the other to be the control. These procedures were designed to ensure that Mission, non-governmental organisation, and private sector programmes (for details, please refer to the following section) would be distributed evenly between intervention and control sites. Figure 1Location of Intervention and Control Communities in Manicaland Province, Eastern Zimbabwe We assessed the effect of the intervention using results from laboratory tests for HIV-1 infection and questionnaire data collected in the baseline and 3-y follow-up rounds of a population-based, closed-cohort survey. The 12 study communities were enumerated in a phased manner, with paired communities being enumerated consecutively to minimise the effects of any seasonal factors. HIV-1–prevention activities were commenced in each intervention community shortly after completion of the baseline survey in that community. In each community, individuals eligible for the study were identified in the first round using data from household listings prepared in an initial census. All males and females aged 17–54 y and 15–44 y at last birthday (the age groups expected to have the highest incidence of HIV infection), respectively, who had slept in a household in the community for at least four nights in the previous month, and who had also done so at the same time 1 y earlier, were considered eligible for the study. In heterosexually driven HIV-1 epidemics, risk of infection can be correlated amongst marital partners [13]. 
Therefore, to maximise statistical power to detect differences in HIV-1 incidence, enrolment was restricted to one randomly selected member per marital group.\n\nBODY.METHODS.INTERVENTIONS:\nIntervention and control communities were to receive standard Government services including basic syndromic STI management, condom distribution from health clinics and Zimbabwe National Family Planning Council outlets, home-based care, and limited HIV/AIDS–focussed IEC activities (e.g., occasional AIDS-awareness meetings and distribution of posters and leaflets). In addition, social marketing of male and female condoms would be provided through an ongoing national programme [14]. The intervention comprised targeted and population-level strategies to promote safer sexual behaviour and to improve treatment of STIs that facilitate HIV-1 transmission. The intervention strategies were implemented by two local non-governmental organisations (Family AIDS Caring Trust and the Biomedical Research and Training Institute) and the Zimbabwe Ministry of Health and Child Welfare through an integrated programme of community- and clinic-based activities. Integration of the individual programme components was achieved through the joint involvement of the participating agencies in the planning and implementation of activities and through the inclusion of biomedical and behavioural aspects within each component. The programme design comprised three key components: (1) peer education and condom distribution amongst commercial sex workers and male clients at workplaces and in the general community, supported by income-generating projects; (2) strengthened syndromic management of STI services at local health centres; and (3) open days with HIV/AIDS IEC activities at health centres to promote safer sexual behaviour and to increase the uptake of local STI treatment services. The peer-education component was based on a model which had been developed by the Project Support Group at the University of Zimbabwe [7] and which had been widely implemented within Zimbabwe and neighbouring countries. Activities were held weekly at workplaces and at locations within the general community (e.g., beer halls and markets) where casual relationships were most frequently formed [15]. The target population comprised sex workers and male clients who form a bridge population in HIV transmission [16] between sex workers and the monogamous (or serial monogamous) majority of women [17,18]. It was posited that the high HIV-1 incidence observed amongst young women could be reduced by altering the behaviour of their older male partners whose own behaviour was intrinsically more risky [19]. The behavioural component would be reinforced in counselling sessions with STI patients and through micro-credit income-generating projects to reduce unmarried women's dependence on commercial sex work. The micro-credit scheme consisted of small interest-free loans repayable over 10 mo, provided to groups and to individuals together with training in small-business management. The targeted activities would be extended to the general population through open days held at local health centres. Besides providing basic HIV/AIDS information, it was envisaged that programme meetings and activities, by their continuous nature, would sustain high levels of awareness of the risks of HIV transmission and would facilitate renegotiation of community social norms, making safer behaviours easier to adopt. 
The key messages of the programme were: (1) remain faithful to one regular sexual partner; (2) use condoms consistently with any casual sexual partners; and (3) seek prompt and effective treatment for any STIs. Syndromic management of STIs at primary healthcare centres was first introduced in Zimbabwe in the 1980s [20] and formed the basis of STI diagnosis and treatment services at baseline in the intervention and control communities. It was envisaged that these services could be strengthened and made more effective through a programme of regular classroom training and on-site supervision of nursing staff, through the introduction of training in systemic counselling for STI patients, and through the provision of small quantities of treatment drugs to cover delays in routine supplies. Quality-assurance procedures applied in the intervention communities included pre- and post-training tests for peer educators and, for nursing staff, attending the syndromic STI management and systemic counselling courses, regular on-site supervision (including random spot checks) and training, refresher courses, routine planning meetings and monitoring of service statistics, and quarterly workshops where detailed programme procedures were reviewed and updated. An interim qualitative process evaluation of intervention activities was conducted during the inter-survey period, and a report on the findings was provided to the implementing organisations.\n\nBODY.METHODS.OUTCOME AND PROCESS MEASURES:\nThe primary outcome of the study was HIV-1 incidence at the community level amongst individuals who were uninfected at baseline. Blood was collected onto Whatman No. 3 filter paper and transported to the Biomedical Research and Training Institute laboratory in Harare. Blood spots were air dried at 4 °C and, for long-term (>1 mo) storage, were kept at −20 °C. For baseline studies, blood was eluted into phosphate-buffered saline, and antibodies to HIV were detected using a dipstick dot EIA (ICL-HIV-1/HIV-2 Dipstick, [PATH, http://www.path.org; produced locally in Thailand]) and a standard protocol [21,22]. All positive results and a 10% sample of negative results were confirmed using a plate EIA (Abbott Third-Generation HIV-1/HIV-2 EIA [http://www.abbott.com] or Genelavia MIXT HIV-1/HIV-2 [Sanofi Diagnostics Pasteur, Marnes La Coquette, France]). At follow-up, a similar protocol was followed. Only the samples from those participants recorded as being HIV seronegative at baseline were tested at follow-up, again using a dot EIA (ICL-HIV-1/HIV-2 Dipstick, [PATH, produced locally in India]). Where seroconversion was indicated, the frozen stored baseline sample was retested to confirm the original negative result using the same dot EIA test. Where the baseline result remained negative, the Abbott EIA test was used to confirm both baseline and follow-up results. The change in place of manufacture of the dot EIA and the exclusive use of Abbott test kits to confirm positive sera at follow-up was due only to changes in the supply of test reagents, and not to perceived changes in sensitivity or specificity [23]. Apart from the principal investigators (based in Harare, London and Oxford) and those nurses given permission by participants requesting voluntary counselling and testing (VCT), all research personnel remained blind to the HIV-1 status of individual participants. 
Secondary outcomes, measured at the community and individual level, were self-reported genital ulcers and urethral or vaginal discharge in the past year (STI cases), STI treatment effectiveness (self-reported cessation of symptoms), indicators of sexual and health-seeking behaviour change, and HIV/AIDS knowledge. The behaviour-change variables assessed were sexual debut, sexual partner change in the past year, non-regular partnerships in the past month, and unprotected sex with regular and casual partners in the past 3 y. The data on sexual partnerships and condom use were collected using the Informal Confidential Voting Interview method for 75% of respondents selected at random in the first round of the survey. This method includes procedures to build rapport, ensure a non-judgemental interview approach, and provide reassurance that there are no right or wrong answers to questions of a personal nature, and uses a simple secret voting-box system to reduce embarrassment and guarantee confidentiality in low-development settings [18]. Its use has been shown to be associated with greater disclosure of socially proscribed behaviour in the study population [24]. Process indicators examined comprised changes in knowledge and psychosocial status and indicators of programme coverage and quality.\n\nBODY.METHODS.SAMPLE-SIZE CALCULATIONS:\nInitial sample-size calculations assumed 20% HIV-1 prevalence at baseline, 30% loss to follow-up after 2 y, and 80% power to detect a 40% reduction in HIV-1 incidence in the intervention communities compared with control communities, assuming a background yearly incidence of 2%. Based on six pairs of communities and a co-efficient of variation between communities of 0.15, the required sample size in each community was 1,000. Funding constraints and slower than anticipated implementation of intervention activities led to revisions of the sample size for each community to 800 and the length of follow-up to 3 y, respectively. Assuming a proportionate increase in loss to follow-up to 41%, these arrangements also yielded 80% power to detect a 40% reduction in HIV-1 incidence.\n\nBODY.METHODS.STATISTICAL METHODS:\nTo test the randomisation with small numbers of communities, HIV-1 prevalence, STI history, and socio-demographic characteristics were compared at baseline for study participants in the intervention and control communities, together with uptake of STI treatment and VCT services offered at baseline. Outcome and process indicators were compared for intervention versus control communities. Analysis of the primary outcome was on an intention-to-treat basis. Incident events and person-years at risk of seroconversion were used to calculate HIV-1 incidence rates and unadjusted and adjusted incidence rate ratios (IRR) with 95% confidence intervals (CIs) for each pair of communities. Adjustment was made for sex, 3-y age group, and community-level baseline HIV prevalence. The overall IRRs (unadjusted and adjusted) were taken to be the geometric means of the IRRs for the six pairs of communities. We calculated 95% CIs for each geometric mean as geometric mean ± 1.96 × standard error of the geometric mean. Paired student t-tests on the logarithms of the pair-specific IRRs were used to test whether these differed significantly from unity [25]. The coefficient of variation between communities was calculated based on baseline HIV prevalence using a standard procedure for pair-matched studies [26]. 
Analyses of prevalence for secondary outcome and process variables were conducted separately for male and female respondents seen at both survey rounds by fitting logistic regression models to the individual-level data and adjusting for community pair and, where available, value of variable at baseline. Since most programme activities were targeted and overall coverage of programme activities was therefore limited, sub-group analyses, adjusted for community pair, were done for HIV-1 incidence and behavioural outcomes to assess the individual-level effects of attendance at programme meetings. Data were entered and validated using SPSS-PC (http://calcnet.mth.cmich.edu/org/spss/index.htm) and data analysis was conducted in Stata version 7 (http://www.stata.com). Statistical tests were double-sided and results were taken to be significant at the 5% level.\n\nBODY.METHODS.ETHICAL APPROVAL:\nAll study participants in the intervention and control communities were offered free VCT for HIV-1, an information sheet on HIV/AIDS, results from a diagnostic test for Trichomonas vaginalis [27] (done at baseline only), and free treatment for T. vaginalis and other STIs from a research nurse. Testing and treatment for T. vaginalis was provided because the prevalence of other curable STIs was low in the study areas [22]. Antibodies reactive with T. vaginalis were detected in DBS eluates following a previously described procedure [27,28]. Written informed consent was sought as a condition of enrolment and continuation in the study. Prior ethical approval was obtained from the Research Council of Zimbabwe, number 02187; the Applied and Qualitative Research Ethics Committee in Oxford, United Kingdom, N97.039; and the UNAIDS Research Ethics Committee, ERC 98/03.\n\nBODY.RESULTS.PARTICIPANT FLOW:\nIn round 1 of the census (July 1998 to February 2000), 5,943 and 6,037 eligible individuals in the intervention (total population size 18,104) and control (18,633) communities, respectively, were selected for recruitment into the study cohort (Figure 2). In round 2, 3 y later (July 2001 to February 2003), 1,044 (23%) and 1,144 (26%) of baseline respondents who were still alive had migrated away from the intervention and control communities, respectively, and were therefore lost to follow-up (Figure 2). At both baseline and follow-up, migrants and non-migrants had similar risks of HIV-1 infection and associated behaviour [29]. Of those still resident in the intervention and control communities, 2,664 (75%) and 2,564 (77%), respectively, were interviewed and blood samples taken for a second time. Temporary absence from the usual place of residence was the main reason for non-participation in the intervention (n = 794, 95%) and control (n = 698, 94%) communities. The overall proportions of baseline respondents followed up at the end of the study were 55% and 56% in the intervention and control communities, respectively. The median follow-up of communities was 3.0 y (range of median within communities, 3.0–3.1). Figure 2Flow-Chart Comparing Participation and Follow-Up Rates in the Intervention and Control CommunitiesIndividuals enrolled in round 1 and still resident in the study communities were considered eligible for participation in round 2.\n\nBODY.RESULTS.BASELINE DATA:\nHIV-1 prevalence was higher in the intervention communities than in the control communities (24% versus 21%, risk ratio 1.13 [95% CI 1.05–1.22], p = 0.001). T. 
vaginalis infection, secondary school education, and spatial mobility were more common in the control communities, whilst history of genital discharge and uptake of STI treatment and VCT services offered in the survey were low overall but more frequent in the intervention communities (Table 1). However, the differences in each case were small and were unlikely to be clinically meaningful. Table 1 Baseline Characteristics of the Study Populations \n\nBODY.RESULTS.OUTCOMES AND ESTIMATION:\nMedian follow-up per person was 2.9 y (range 1.4–3.9) and 3.0 y (range 1.5–4.1) in the intervention and control communities, respectively. In total, 4,052 individuals had 212 incident events of HIV-1 during 12,009 person-years at risk, giving an HIV-1 incidence rate of 1.77 per 100 person-years at risk. HIV-1 incidence was higher in communities with higher baseline HIV prevalence (IRR 11.49 [95% CI 1.80–73.40], p = 0.010), but this difference disappeared after adjustment for stratification by community type (p = 0.8). HIV-1 incidence was higher in the intervention communities than in the control communities overall, and in each community type, except in the forestry plantations where it was almost identical (Table 2). The difference was not significant after adjustment for sex, age group, and baseline HIV prevalence (IRR 1.27 [95% CI 0.92–1.75], p = 0.012). The observed coefficient of between-community variation was 0.14. Table 2 HIV Prevalence at Baseline and HIV Incidence and IRRs for Intervention Versus Control Communities Looking at outcome indicators for community members (rather than for communities—the unit of randomisation), self-reported STI symptoms were similar in both sets of communities (Table 3). Treatment for STI symptoms in males was effective more frequently in the intervention communities, with men in the intervention community in five of the six matched pairs reporting reduced symptom recurrence. However, more young women in the intervention than in the control communities had started sex, and reports of unprotected sex with a casual partner in the study period were more common in the intervention communities. No differences were observed in consistent condom use with regular partners between the two sets of communities. In the intervention communities, knowledge about HIV/AIDS was enhanced amongst men, and more respondents reported a close relative or family member with AIDS (sex- and age-adjusted prevalence odds ratio 1.22 [95% CI 1.05–1.42], p = 0.009). Slightly more women in the intervention communities reported that condom use within marriage was becoming acceptable, but a greater proportion of men agreed with the statement that \"condoms reduce the pleasure of sex\". Table 3 Biomedical, Sexual Behaviour, and Psychological Outcomes at Follow-up by Residence in the Intervention and Control Communities A total of 63,261 peer-education meetings were held, and 6.8 million condoms were distributed by the programme in the intervention communities (Table 4). Outputs increased over time as new communities entered the programme. However, owing to high inflation and economic decline, the micro-credit income-generating projects proved impossible to implement. We were able to obtain data on STI episodes treated at clinics in the 11 out of 12 study communities that reported cases to the administrative districts of Mutasa and Makoni. In the three intervention communities each in Mutasa and Makoni, STI cases fell by 66% and 51%, respectively, over the 3-y study period. 
Similar declines of 67% and 52% occurred at clinics in the four control communities in Mutasa and the one control community in Makoni. Coverage of training in syndromic STI management and systemic counselling for nursing staff was high (Table 4). Table 4 Summary of Service Statistics on Programme Output Most of the activities were targeted at high-risk groups. In the general population sample interviewed in the follow-up survey, 1,779 (35%) and 647 (13%) of 5,098 respondents reported attending an HIV/AIDS meeting and a programme meeting, respectively (Table 5). More respondents in the intervention communities than in the control communities attended an HIV/AIDS meeting (41% versus 28%, prevalence rate ratio 1.44 [95% CI 1.33–1.56], p < 0.001) and a programme meeting (20% versus 5%, 4.27 [95% CI 3.52–5.17], p < 0.001), and participation was higher among men than women (prevalence rate ratio 1.32 [95% CI 1.14–1.53], p = 0.002). Fewer women in the intervention communities had heard about HIV/AIDS from external sources or believed that STI drugs were available at their local clinics. Sixty-two (2%) out of 2,528 respondents in the control communities reported spending at least 1 d in the past month in the intervention communities; the equivalent number for respondents in the intervention communities visiting control communities was 70 (3%) out of 2,683. Table 5 Intervention Coverage \n\nBODY.RESULTS.ANCILLARY ANALYSES:\nIn exploratory analysis to assess where the intervention failed, we found that HIV-1 incidence was reduced in males (IRR 0.48 [95% CI 0.24–0.98], p = 0.044) who reported attending programme meetings, after adjustment for the targeting of activities to groups with high-risk behaviour (Table 6). Amongst men who reported one or more casual sexual partners in the past 3 y, fewer of those who attended meetings reported unprotected sex with these partners (prevalence odds ratio 0.45 [95% CI 0.27–0.75], p = 0.002). HIV-1 incidence was not associated with programme participation in women. Table 6 HIV Incidence and IRRs by Meeting Attendance and Sex \n\nBODY.DISCUSSION.INTERPRETATION:\nWe conducted a scientific trial of the feasibility and impact of an integrated community- and clinic-based HIV-1–prevention intervention. The income-generating projects apart, the intervention activities were feasible. The outputs of the programme were extensive with more than 63,000 meetings being conducted and almost 7 million condoms distributed by trained peer educators. Programme messages were considered relevant and realistic. Local STI treatment and counselling services were strengthened and promoted in accordance with the intervention protocol. For male participants, these activities improved HIV/AIDS knowledge, increased the effectiveness of STI treatment, increased consistent condom use with casual partners, and reduced HIV-1 incidence. However, the cluster-RCT results clearly show that the intervention had no positive impact at the community level and suggest possible detrimental effects on the onset of female sexual activity and condom use with casual partners over a 3-y timeframe. Did the cluster-RCT design fail to capture the true effect of the intervention? There are three possibilities: (1) inadequate statistical power; (2) insufficient follow-up; and (3) contamination of intervention within control communities. 
The study design provided adequate statistical power to detect a meaningful average reduction (40%) in HIV-1 incidence in the intervention versus the control communities over a 3-y observation period. In hindsight, an effect size of 40% was too optimistic and the study had insufficient power to detect a smaller effect. However, there was no trend in the results towards reduced HIV-1 incidence in the intervention communities. Largely due to migration, attrition was close to that anticipated in the study design and was comparable to other recent cohort studies [6,10,9,30]. Migrants had similar characteristics and sexual behaviour to non-migrants [29]. The results of the exploratory sub-group analysis generate the hypothesis that high-risk behaviour was reduced in males attending programme meetings but did not translate into a wider impact on HIV-1 incidence at the population level. Changes in core and bridge populations may take more time to reflect in the general population than was observed in the trial. However, a longer period of follow-up would have increased attrition, and the finding of a possible adverse effect at the population level meant that it would not have been ethical to continue with the same intervention. Future trials of behaviour-change interventions may need to include multiple rounds with phased recruitment and (where interim results are favourable) may need to consider phased intervention implementation. We minimised intervention contamination by selecting physically separated study communities, and movements between intervention and control communities were rare. However, a similar peer-education programme was implemented in one control community (small town), and HIV-1–prevention activity was considerable in all control communities that also had greater access to information from external sources. In some cases, programme messages (e.g., promotion of condom use) conflicted with those of other agencies working in the intervention communities. The effects of these other programmes could have limited our ability to detect a reduction in HIV-1 incidence caused by the current intervention. The absence of an observed effect of the intervention was not explained by differences in HIV-1 prevalence, sexual behaviour, STI cofactors, mobility, or socio-demographic composition at baseline. The earlier sexual debut in females and greater unprotected sex with casual partners seen in the intervention communities during the study period were not present at baseline but could reflect increased willingness to report high-risk behaviours in settings where there was more open discourse about HIV and AIDS. The peer-education programme could have had some effect for male but not for unmarried female participants. Preliminary findings from subsequent qualitative investigations indicate that, in the predominantly rural communities in which the study was conducted, poverty and the associated failure of income-generating projects meant that some peer educators were unable to maintain safer behaviours. Given their increased visibility within the community—intended to enhance their status and self-esteem and, thus, to reinforce their commitment to and role as models for behaviour change—they may, inadvertently, have served as negative role models and, thereby, may have contributed to the greater female early-age sexual activity. 
Free distribution of condoms by women still engaging in unprotected commercial sex led to their being poorly valued and reinforced their association with promiscuity.\n\nBODY.DISCUSSION.GENERALISABILITY OF FINDINGS:\nEpidemiological context can affect the impact of interventions [31], and structural obstacles can limit the pace and extent to which activities are implemented and the quality of these activities [32]. The HIV-1 epidemic stabilised in eastern Zimbabwe during the study period, with HIV-1 prevalence declining by 40%–50% in young adults [23]. This decline was accompanied by delayed sexual debut, reduced sexual partner change, and consistent condom use with casual partners [33,23]. Prevalence of syphilis, gonorrhoea, and Chlamydia is low, but non-curable herpes simplex virus type 2 remains common [22]. Risk reduction makes transmission more fragile, and an intervention could have a larger effect when set against secular behavioural changes [2]. Mathematical model simulations suggest that there would also be a greater chance of detecting a significant effect of the intervention even though there would be fewer seroconversions to power the calculation [34,35]. Structural obstacles to intervention implementation included HIV/AIDS mortality which disrupted the programme by claiming the lives of two programme coordinators and several of the nursing staff and peer-educators. Economic decline made the income-generating projects unfeasible and reduced the effectiveness of other components of the intervention. We believe that the coverage of the peer-education programme was satisfactory, given the focus on highly sexually active individuals. Meeting coverage could have been under-estimated in the survey since one-to-one discussions and activities at beer halls and other public places may not have been recognised as meetings by those present. However, the high level of spatial mobility limited the number of people who were reached at the required level of intensity and consistency, whilst national shortages of foreign currency restricted fuel and drug supplies, hampered attempts to extend community activities into the more remote rural areas, and disrupted the STI treatment programme in both the intervention and control communities. The intervention that we evaluated could have greater effect where an HIV-1 epidemic is younger, HIV-1 incidence is greater, local sexual networks are less diffuse, background STI control is weak, herpes simplex virus type 2 is less common, population mobility is lower, and/or the socio-economic climate is stable. We cannot rule out an effect of peer education in the urban intervention community since similar activities were implemented in the control community. Targeted peer education may work better in towns where bar-based sex work is more extensive. The absence of reduced HIV-1 incidence in farming estates reinforces doubts raised by the Harare factory workers study [8] concerning the efficacy of workplace peer education.\n\nBODY.DISCUSSION.OVERALL EVIDENCE:\nThese findings are important since the strategies evaluated—i.e., peer education, condom distribution, and syndromic STI management—are theory-based, have the potential for independent effects [11], and are widely promoted [36,37]. Syndromic STI management was effective in a nascent epidemic [6]. However, our disappointing findings echo those from recent trials [9,12] and emphasise the need for alternative strategies of behaviour-change promotion. 
Social marketing of condoms [14], larger poverty-alleviation programmes to reduce women's reliance on sex work—based on skills training and careful market research rather than on small-scale income-generating projects—and strategies which reach beyond high-activity core groups, such as the Popular Opinion Leader programme [38,39], and client-centred counselling [40], could be more viable and effective in reducing HIV-1 transmission in rural areas. Given the necessary economic conditions, unmarried women may still play a useful role in bar-based programmes since beer halls remain foci for high-risk behaviour [41,15].\n\nBODY.SUPPORTING INFORMATION.TRIAL REGISTRATION:\nThis trial has the registration number ISRNCT00390949 in the International Standard Randomized Controlled Trial Number Register. Found at: http://www.clinicaltrials.gov/ct/show/NCT00390949?order=1\n Protocol S1Protocol(35 KB DOC)Click here for additional data file. Protocol S2Revisions to Protocol(35 KB DOC)Click here for additional data file. Text S1CONSORT Checklist(48 KB DOC)Click here for additional data file. Text S2Ethical Approval, Information Letter, and Consent Forms(2.8 MB PDF)Click here for additional data file.\n\n**Question:** Compared to Baseline communities that received standard Government services what was the result of Baseline communities that received additional education and preventive measures on Programme meeting attendance?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly increased\n"
}
] |
significantly increased
|
387
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Effects of Bladder Training and/or Tolterodine in Female Patients with Overactive Bladder Syndrome: A Prospective, Randomized Study\n\n ABSTRACT:\nWe compared the effects of bladder training and/or tolterodine as first line treatment in female patients with overactive bladder (OAB). One hundred and thirty-nine female patients with OAB were randomized to treatment with bladder training (BT), tolterodine (To, 2 mg twice daily) or both (Co) for 12 weeks. Treatment efficacy was measured by micturition diary, urgency scores and patients' subjective assessment of their bladder condition. Mean frequency and nocturia significantly decreased in all treatment groups, declining 25.9% and 56.1%, respectively, in the BT group; 30.2% and 65.4%, respectively, in the To group; and 33.5% and 66.3%, respectively in the Co group (p<0.05 for each). The decrease in frequency was significantly greater in the Co group than in the BT group (p<0.05). Mean urgency score decreased by 44.8%, 62.2% and 60.2% in the BT, To, and Co groups, respectively, and the improvement was significantly greater in the To and Co groups than in the BT group (p<0.05 for each). Although BT, To and their combination were all effective in controlling OAB symptoms, combination therapy was more effective than either method alone. Tolterodine alone may be instituted as a first-line therapy, but may be more effective when combined with bladder training.\n\nBODY.INTRODUCTION:\nOveractive bladder (OAB), defined as increased frequency and urgency with or without urge incontinence, is a very common condition, especially in middle-aged women, with a reported incidence of approximately 17% of the general female population (1, 2). Since many women may consider OAB as a natural aging phenomenon and thus do not seek treatment, its actual prevalence may be even higher. Despite its common occurrence, however, our understanding (3-5) of the pathophysiology underlying the development of OAB has remained elementary. It is managed most commonly by drug therapy, followed by various behavioral interventions and, less frequently, by surgical methods. Each of these methods, however, has variable efficacy and adverse effects. Classically, urge incontinence has been considered most important, being the key index in treatment outcome assessments. However, since the new consensus by the International Continence Society on the diagnosis of OAB (6), the presence or absence of actual urine leakage was regarded as less important. With the adaptation of this new definition, the number of patients newly diagnosed with OAB is expected to increase, emphasizing the need to develop first-line interventions for their symptoms. The three most commonly employed methods for treating newly-diagnosed OAB are bladder training, anticholinergic therapy and a combination of the two. 
Here we report a controlled, randomized study comparing the clinical efficacy of these three methods, for the purpose of determining the most appropriate first-line therapeutic modality.\n\nBODY.MATERIALS AND METHODS:\nBetween May 2001 and April 2002, 139 women presenting with symptoms of urgency and frequency, with or without urge incontinence, were prospectively enrolled in this study. Inclusion criteria included being 18 yr of age or older, having a urination frequency of 8 or more times per day, the presence of urge symptoms that may or may not accompany incontinence, symptom duration of 3 months or longer and no prior history of treatment for OAB. We excluded patients with active urinary tract infection, clinically significant stress urinary incontinence, bladder outlet obstruction, interstitial cystitis, glaucoma or megacolon. Also excluded were patients with a maximal urine flow rate of less than 10 mL/sec or a postvoid residual urine amount that was more than 30% of the total amount voided on uroflowmetry. Upon enrollment, patients were randomly assigned to bladder training (BT, n=46), tolterodine treatment (To, 2 mg twice daily, n=47) or a combination of the two (Co, n=46). Each patient received a physical examination and uroflowmetry with postvoid residual urine scan. In addition, each patient kept a record of urgency scores, with 0 indicating no symptoms, 1 rarely, 2 occasionally, 3 often, and 4 all the time, and a frequency-volume chart. At 12 weeks, satisfaction score was assessed, with 0 representing fully satisfied, 1 much improved, 2 slightly improved, 3 no improvement, and 4 worsened; a change of 2 points or more was considered symptom improvement. The frequency-volume chart and a survey of adverse effects of medication were repeated every 4 weeks. Patients in the BT and Co groups were educated of the bladder drills by a nurse specialist. Gross and functional anatomy of the lower urinary tract and pelvis were demonstrated, with instructions on how bladder drills can increase functional bladder capacity and actual voiding habits. Each patient was instructed to examine her frequency-volume chart to determine the longest interval she could initially hold urination and sustain urge symptoms. Each patient was then taught to increase this interval by 15-min increments, with the aim of achieving a 3 to 4 hr interval and a voided volume of 300 to 400 mL. In addition, whenever urgency occurred, patients were educated to perform Kegel exercises for symptom alleviation. All bladder drill education was performed by one nurse specialist, who telephoned each patient every 2 weeks to make sure she was following the exact drill taught. In the Co group, the bladder drill was started at the same time as the start of medication. Of the 139 patients, 89 (64.0%) completed the 12 week treatment regimen and were subjects of outcome analyses. Voiding frequency, nocturia, urgency score before and after the treatment, satisfaction score at 12 weeks and adverse events were measured in the three groups. Student's t-test was used for statistical analysis, and p<0.05 was defined as statistically significant.\n\nBODY.RESULTS:\nPatient characteristics, including age, weight, symptom severity and duration, parity, pretreatment uroflowmetry profile, presence and degree of urge or stress incontinence, and urgency score, were similar among the three treatment groups (Table 1). 
At 12 weeks, 26 patients (56.5%) in the BT group, 32 (68.1 %) in the To group, and 31 (67.4%) in the Co group had completed the protocol. Mean frequency in the three groups decreased significantly, from 10.9 to 8.1 (25.9%) in the BT group, from 11.6 to 8.1 (30.2%) in the To group, and from 11.9 to 7.9 (33.5%) in the Co group (p<0.05 each) (Fig. 1). Nocturia also decreased significantly in the three groups, from 1.5 to 0.6 per night (56.1%) in the BT group, from 1.7 to 0.6 (65.4%) in the To group, and from 2.0 to 0.6 (66.3%) in the Co group (p<0.05 each) (Fig. 2), as did urgency scores, from 2.6 to 1.4 (44.8%) in the BT group, from 2.8 to 1.1 (62.2%) in the To group, and from 3.0 to 1.2 (60.2%) in the Co group (p<0.05 each) (Fig. 3). When we compared these changes among the three groups, we found that patients in the Co group showed greater improvements in frequency and urgency scores than did patients in the BT group (p<0.05 each), and that urgency scores in the To and Co groups showed significantly greater improvement than in the BT group (p=0.017 and p=0.021, respectively). No difference was observed between the To and Co groups. Satisfaction scores at 12 weeks did not differ among the three groups, being 1.5 in the BT group, with improvement in 53.9%; 1.4 in the To group, with improvement in 63.0%; and 1.3 in the Co group, with improvement in 71.0%. No adverse events were reported from the BT group, whereas 13 patients (40.6%) in the To group and 12 (38.7%) in the Co group reported one or more adverse events (Table 2). Most of the events were mild and well-tolerated, except that severe dry mouth occurred in 2 patients from each group, which led them to discontinue medication (withdrawal rates, 6.3% and 6.5%, respectively).\n\nBODY.DISCUSSION:\nDespite extensive research on the complicated interactions involving the neural network of the cerebrum, sacral cord and the detrusor, little is known regarding the pathologic process that occurs during the development of OAB (4, 5, 7). Accordingly, therapeutic approaches for OAB have been diverse, consisting largely of medical therapy, but also including behavioral interventions such as bladder training, pelvic floor muscle exercises with or without biofeedback, and electrical stimulation. Other therapeutic approaches have included neuromodulation of the sacral nerves, intravesical instillations of antimuscarinics and bladder overdistention, with augmentation cystoplasty remaining as the last resort. Naturally, non-invasive methods, including medical therapy and behavioral intervention, have been the initial treatments of choice for OAB symptoms. Although it is not known how bladder drills control and sustain OAB symptoms, it is believed that ill-trained voiding habits leading to decreased bladder compliance by frequent voiding are corrected by progressively increasing the voiding interval, thus increasing the voided volume and the functional bladder capacity. Retraining these patients to void at normal intervals would restore compliance when capacity increases and may contribute to symptom improvement (8-11). Observations on the electrical stimulation of the pelvic floor muscles (12) have shown that, during electrical stimulation, hyperactivity of the bladder was either diminished or completely abolished. This mechanism has been exploited as part of bladder training regimens. Clinically, bladder training has been shown to be effective in the management of urgency and urge incontinence (8, 13-15). 
When bladder training was compared with medical therapy (200 mg flavoxate hydrochloride plus 25 mg imipramine three times daily for 4 weeks), urge incontinence was completely controlled in 84% of patients in the BT group, with symptom relief in 76%, compared with 56% and 48%, respectively, in the medication group (16). A larger study (17), however, found that addition of behavioral interventions did not seem to offer any additional benefit to medical therapy (tolterodine) after 6 months. Urgency episodes and daytime frequency decreased 83% and 27.3%, respectively, in the tolterodine group, compared with 78.7% and 23%, respectively, in the combination group. Improvements in patients' perception of urinary symptoms were reported by 85.9% of the tolterodine group and 81.7% of the combination group. We found that all patients, a homogeneous group of female patients, regardless of treatment methodology, reported significant improvements in all voiding parameters examined, but that the addition of medication to BT resulted in additional improvements in urgency and frequency. Our finding of better outcomes in the Co than in the To group may be due to the bladder training program and design used here, which was a short-term intensive course of bladder drills. As many authors have emphasized, patients' understanding and compliance are the key factors in bladder training. Accordingly, the exercise program that we used did not simply require patients to contract and relax their pelvic floor muscles for certain periods of time. Instead, patients were given specific goals of 15 min each in sustaining contractions and were instructed how to increase these intervals. Having such a specific aim of what to accomplish with each exercise must have been enforcing and rewarding as well. In addition, repeated instructions, confirmation and reassurance were essential and constantly encouraged patients to stay on the right track for 12 weeks. Similar results have been reported (18), in which patients were provided with written information sheets describing the aims and endpoints of therapy as well as specific instructions on bladder drills. Although there were no formal bladder training sessions or additional follow-up for those in the bladder training-only group, the self-explanatory information sheets the patients kept would have reinforced and encouraged patients while on bladder drills. Thus, patients on medication plus bladder training reported significantly better results after therapy than did patients on medication alone. In the previous study, in which the drills were simplified, it may have facilitated patient adherence to treatment, but without any further reinforcements, the therapeutic benefits of BT may have been compromised (17). In this study, we found that patient adherence to treatment was lowest and discontinuation rate highest in the BT group, although the treatment efficacy results were similar at the end of 12 weeks. This may be attributed to the longer duration of patient maneuvering required before experiencing significant improvement in this BT-only group. Thus, despite many efforts by the physician to make patients stay on the correct treatment regimen, it was evidently easier for patients to take medication twice a day than to repeat exercises many times a day for months. We noted no adverse effects in the BT group, whereas, in the other 2 groups, the most common adverse effect was dry mouth. 
We found that 6.3% of patients in the To group and 6.5% in the Co group had to discontinue medication due to severe dry mouth, percentages lower than those previously reported (19, 20). These results indicate that 4 mg/day tolterodine is safe and does not require dose adjustment. From these results, we conclude that bladder training, tolterodine and the combination of the two are all effective in controlling OAB symptoms, with comparable patient satisfaction. Since a short-term intensive course of bladder drill is as effective as medical or combined therapy in all voiding symptoms, but does not have any adverse effects, bladder training can be safely and effectively employed as first line treatment for OAB. In patients who have predominantly symptoms of frequency or urgency, tolterodine may be more effective, especially when combined with bladder training.\n\n**Question:** Compared to Tolterodine only (To) what was the result of Bladder training plus tolterodine (Co) on Urgency score improvement?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** no significant difference\n"
}
] |
no significant difference
|
237
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: A prospective, contralateral comparison of photorefractive keratectomy (PRK) versus thin-flap LASIK: assessment of visual function\n\n ABSTRACT.PURPOSE:\nTo compare differences in visual acuity, contrast sensitivity, complications, and higher-order ocular aberrations (HOAs) in eyes with stable myopia undergoing either photo-refractive keratectomy (PRK) or thin-flap laser in situ keratomileusis (LASIK) (intended flap thickness of 90 μm) using the VISX Star S4 CustomVue excimer laser and the IntraLase FS60 femtosecond laser at 1, 3, and 6 months postoperatively.\n\nABSTRACT.METHODS:\nIn this prospective, masked, and randomized pilot study, refractive surgery was performed contralaterally on 52 eyes: 26 with PRK and 26 with thin-flap LASIK. Primary outcome measures were uncorrected distance visual acuity (UDVA), corrected distance visual acuity (CDVA), contrast sensitivity, and complications.\n\nABSTRACT.RESULTS:\nAt 6 months, mean values for UDVA (logMAR) were −0.043 ± 0.668 and −0.061 ± 0.099 in the PRK and thin-flap LASIK groups, respectively (n = 25, P = 0.466). UDVA of 20/20 or better was achieved in 96% of eyes undergoing PRK and 92% of eyes undergoing thin-flap LASIK, whereas 20/15 vision or better was achieved in 73% of eyes undergoing PRK and 72% of eyes undergoing thin-flap LASIK (P > 0.600). Significant differences were not found between treatment groups in contrast sensitivity (P ≥ 0.156) or CDVA (P = 0.800) at postoperative 6 months. Types of complications differed between groups, notably 35% of eyes in the thin-flap LASIK group experiencing complications, including microstriae and 2 flap tears.\n\nABSTRACT.CONCLUSION:\nUnder well-controlled surgical conditions, PRK and thin-flap LASIK refractive surgeries achieve similar results in visual acuity, contrast sensitivity, and induction of HOAs, with differences in experienced complications.\n\nBODY.INTRODUCTION:\nRefractive surgery is one of the most commonly performed elective procedures and will likely maintain its popularity as ablation techniques become more refined and understanding of corneal wound healing improves. Two of the most common methods of refractive surgery are photorefractive keratectomy (PRK) and laser in situ keratomileusis (LASIK). 
The rapid improvement in vision and lack of postoperative pain associated with LASIK has made this the preferred option with patients compared with PRK, which has greater postoperative discomfort and prolonged recovery of visual acuity.1 Recently, there has been renewed interest in PRK because of increasing concerns of complications associated with LASIK flap creation, including dry eye, corneal ectasia, and flap tears.2–5 Thin-flap LASIK attempts to gain benefits of both techniques by creating a flap of between 80 and 90 μm.6–8 Use of a thinner flap results in a more biomechanically stable cornea and decreases incidence of ectasia given the thicker residual stroma.3,9 Cutting a thinner LASIK flap is less invasive to the nerves within the corneal stroma, decreasing the severity and duration of dry eye, possibly by preserving corneal sensation and blinking rate.10–14 Flap creation avoids corneal epithelium removal, allowing reduced healing time and less haze and scarring.15 The present contralateral study compares the outcomes of eyes that have undergone PRK or thin-flap LASIK using the VISX STAR S4 excimer laser (VISX Incorporated, Santa Clara, CA), with flaps created with intended thicknesses of 90 μm using the IntraLase FS60 femtosecond laser (Abbott Medical Optics [AMO], Santa Ana, CA).\n\nBODY.METHODS:\nData from myopic eyes were analyzed, with or without astigmatism, in which the dominant eye was randomized (Research Randomizer software – Urbaniak, www.randomizer.org) to PRK or thin-flap LASIK (90 μm flap) and the nondominant eye underwent the alternative treatment. All PRK and thin-flap LASIK treatments were performed using the VISX Star S4 CustomVue laser at the John A. Moran Eye Center, Salt Lake City, Utah, between February 2008 and July 2009. All surgeries were overseen by two surgeons (M.M., M.D.M.). The research protocol was approved by the University of Utah Hospital Institutional Review Board. All patients included in this study met the US Food and Drug Administration guidelines for VISX CustomVue LASIK. Mean age of patient, 13 men and 13 women, was 30.8 years (range: 23–46). Twenty-six patients (52 eyes) with stable myopia (1.5–8.5 diopters [D]) and astigmatism (0.242–3.11 D) were enrolled in the study. Eleven patients excluded from this study had clinically significant lens opacities, previous corneal or intraocular surgery, keratoconus, unstable refraction, autoimmune disease, immunosuppressive therapy, or were pregnant or breastfeeding. Correction was made for distance and patients desiring monovision correction were excluded. Contact lenses were discontinued 2 weeks prior to screening for soft contact lens wearers and 6 weeks prior to screening for rigid gas permeable lens wearers. All patients had a preoperative examination including assessment of uncorrected distance visual acuity (UDVA), corrected distance visual acuity (CDVA), tonometry, slitlamp examination of the anterior segment, and dilated fundus examination. Manifest and cycloplegic refractions were repeated on 2 separate visits to ensure reliability and stability. Corneal topography and thickness were measured using the Orbscan II v.3.0 (Bausch and Lomb, Rochester, NY). All eyes received 5 preoperative wavefront analyses with the VISX CustomVue WaveScan aberrometer v.3.62 (Fourier) (AMO), without pharmacologic intervention, under mesopic conditions, with a minimum pupil diameter of 6.0 mm. 
The contralateral study design was made so that each eye could act as a control for the fellow eye in each patient, allowing for study groups to be well matched. There were no violations in the randomization; all patients were analyzed as originally assigned. The randomization protocol was generated before the trial and known only to the study coordinator. In all patients, the emmetropic correction target was based on manifest refraction and wavefront analysis. All flaps were created with the IntraLase FS60 femtosecond laser at 60 kHz in a raster pattern with bed energy of 1.15 μJ, side-cut energy of 2.00 μJ, and pocket enabled. The flaps were created with an intended thickness of 90 μm, diameter of 8.4 to 9.0 mm, superior hinge angle of 55°, and a side-cut angle of 70°. Intraoperative pachymetry or optical coherence tomography were not performed to evaluate actual flap thicknesses. If the 8.0 mm maximum intended ablation diameter exceeded the flap diameter, the hinge and flap were shielded during ablation. Postoperatively, each eye undergoing thin-flap LASIK received 1 drop of gatifloxacin 0.3% (Zymar; Allergan Inc, Irvine, CA), prednisolone acetate 1% (Pred Forte, Allergan Inc), ketorolac tromethamine 0.4% (Acular LS, Allergan Inc.), and a bandage soft contact lens (Softlens Plano T, Bausch and Lomb, Rochester, NY). The prednisolone acetate was continued hourly during the first preoperative day and 4 times daily for an additional 6 days. The gatifloxacin was continued 4 times daily for 1 week. In eyes undergoing PRK all eyes had their corneas cooled with 15 mL of BSS (2.8–3.9°C) immediately following ablation. This was followed by 1 drop of a gatifloxacin 0.3% (Zymar), prednisolone acetate 1% (Pred Forte), ketorolac tromethamine 0.4% (Acular LS) and a bandage soft contact lens (Softlens Plano T). Ketorolac tromethamine was continued 4 times a day for 3 days and then discontinued. Gatifloxacin and prednisolone acetate were continued 4 times a day for 1 week with a subsequent steroid taper over 2 to 3 months per surgeon preference. Mitomycin C was not administered to any patient in the study at any time. Both bandage soft contact lenses were removed simultaneously once re-epithelialization was complete, typically on postoperative days 3 to 5. Patients were seen 1 day, 1 week, 1 month ± 10 days, 3 months ±14 days, and 6 months ±14 days. At all follow-up examinations, UDVA and CDVA were tested using a standard Snellen eye chart. Visual acuity was recorded in both Snellen notation and logarithm of the minimum angle of resolution (logMAR) format. Contrast sensitivity was measured in controlled mesopic conditions at 3, 6, 12, and 18 cycles per degree (cpd) using the Vectorvision CSV-1000E chart (Vectorvision, Greenville, OH). Higher-order aberrations (HOAs), including coma Z(3,1), trefoil Z(3,3), and spherical aberration Z(4,0), were measured using the CustomVue WaveScan at a mean diameter of 6 mm. Undilated scans of both eyes were taken preoperatively and 1, 3, and 6 months postoperatively. Primary outcome measures were UDVA, CDVA, contrast sensitivity, and complications. HOAs were measured and trended within groups as secondary measures. After the study was completed, the results were compiled and the data unmasked for statistical analysis. Refractive error, visual acuity, and HOAs were treated as continuous variables and analyzed for significance by independent t-tests. In all tests, P values <0.05 were considered statistically significant. 
Data analysis was done using Microsoft Excel (Microsoft Corp, Redmond, WA).\n\nBODY.RESULTS:\nMean preoperative measurements of UDVA, CDVA, sphere, and cylinder are shown in Table 1. 25 of 26 patients (50 eyes) completed the study at postoperative 6 months. One eye in the thin-flap LASIK group required PRK retreatment following a flap tear and both eyes from this patient were therefore removed from analysis of visual acuity, contrast sensitivity, and HOAs as the retreatment prevented the ability to distinguish results between the 2 surgical methods. The eyes from this patient were still included in the analysis of complications.\n\nBODY.RESULTS.VISUAL ACUITY:\nTable 2 shows visual acuity outcomes at 1, 3, and 6 months postoperatively. Statistically significant differences were found between PRK and thin-flap LASIK in UDVA at 1 month postoperatively, with thin-flap LASIK eyes showing more improvement in UDVA. Visual acuities were not statistically different between the groups at 3 or 6 months.\n\nBODY.RESULTS.STABILITY, EFFICACY, AND PREDICTABILITY:\nTable 3 shows stability, efficacy, and predictability outcomes postoperatively at 1, 3, and 6 months. CDVA was statistically different between groups at 1 month, with 24% of the PRK group losing a line or more from preoperative values, while 9% of eyes in the thin-flap LASIK group lost only 1 line at 1 month. No eyes in the thin-flap LASIK group lost more than 1 line. Also, 39% of eyes in the thin-flap group gained a line by 1 month compared with only 12% of eyes in the PRK group. At 6 months 64% and 56% of eyes had gained a line or more of CDVA in the PRK and thin-flap LASIK groups, respectively (P = 0.462).\n\nBODY.RESULTS.CONTRAST SENSITIVITY:\nContrast sensitivity measurements at 3, 6, 12, and 18 cycles per degree (cpd) in each group are shown in Figure 1. There were no differences between groups at any cpd at any time in the study (P ≥ 0.156). The thin-flap LASIK group showed no change in contrast sensitivity postoperatively (P > 0.131), while patients in the PRK group had a slight decrease in contrast sensitivity at 1 month seen at 3 and 12 cpd (P = 0.004) and (P = 0.025), respectively. At 6 months contrast sensitivity in the PRK group was still significantly decreased from baseline at 3 cpd (P = 0.013), although it did not reach a statistically significant difference at 3 months (P = 0.101).\n\nBODY.RESULTS.COMPLICATIONS:\nTypes of complications differed between the 2 groups. In the PRK group, 2 cases of epithelial defects occurred by 1 week, but had completely resolved by 6 months. Three eyes in the PRK group had mild haze appearing as early as 1 week postoperatively. Haze remained in only 1 eye at 6 months, but was classified as minimal and had no effect on UDVA or CDVA. Nine eyes (35%) in the thin-flap LASIK group experienced complications. In the thin-flap LASIK group, flap debris (1 eye), diffuse lamellar keratitis (DLK, 1 eye), and an epithelial cyst at the edge of 1 flap were observed, with no loss of UDVA or CDVA, and all resolved by 6 months. Microstriae were observed in 6 eyes, one of which was the eye described above with flap debris and the other was the eye with DLK, with no associated loss of UDVA or CDVA, with epithelial proliferation noted as filling the microstria and making them less apparent. 
Two eyes in the thin-flap LASIK group experienced flap tears intraoperatively – one resulting in mild flap edge scarring by 6 months that had no significant effect on visual function, and the other case affecting vision at 1 month postoperatively which was retreated with PRK at 3 months. As a result of the retreatment with the counter surgical technique, the ability to accurately compare visual acuity, contrast sensitivity, and HOAs between the 2 surgical methods was limited and both eyes from this patient were removed from analysis of these measures, but were still included in the analysis of complications.\n\nBODY.RESULTS.HIGHER-ORDER ABERRATIONS:\nAt postoperative 1, 3, and 6 months, 24 (96%), 25 (100%), and 24 (96%) eyes, respectively, in each group completed CustomVue WaveScan analysis. Total root-mean square (RMS) HOAs, coma, trefoil, and spherical aberrations are compared in Figure 2. There were no significant differences between groups in any HOAs throughout the study (P ≥ 0.101), with all P values at 6 months ≥0.63. In both groups, total HOAs (P < 0.008), spherical (P < 0.002), and coma (P = 0.008 at 3 months; P = 0.024 at 6 months) aberrations were significantly increased compared with preoperative conditions. Trefoil showed no significant change throughout the study in either group (P = 0.298).\n\nBODY.DISCUSSION/CONCLUSION:\nThe present study confirms that PRK and thin-flap LASIK are effective surgeries for the correction of low to moderate myopia. Although thin-flap LASIK showed superior visual results in the early postoperative period there was no statistically significant difference in outcomes of UDVA, CDVA, contrast sensitivity, or total RMS HOAs between PRK and thin-flap LASIK by 6 months. In a similar study comparing PRK and thin-flap LASIK, Slade et al also found that UDVA results were better in the thin-flap group early on and equalized by 6 months.16 Our study showed a similar trend, with no significant differences in any of the primary outcomes at 6 months, and with no difference in UDVA at 3 months. Visual regression in our study was similar to outcomes in Slade's study in which 42% of the PRK group lost a line or more of CDVA and 22% of the thin-flap LASIK group lost 1 line at 1 month postoperatively. Despite the use of custom ablation, postoperative increases in total HOAs, sphere, and coma were noted in our study, as also seen by Slade et al, although they noted that the increase in sphere and coma aberrations was significantly higher in the PRK group at 1 and 3 months postoperatively. As found in previous studies, there was no significant change found in trefoil at any time postoperatively.17,18 Our study showed no difference in induction of HOAs between groups at any time. Although increases in HOAs after refractive surgery have been correlated with decreases in contrast sensitivity in other studies, we demonstrate that increases in total RMS, sphere, and coma were seen postoperatively in both groups without a reliable decrease in contrast sensitivity.19,20 Slade's group found that contrast sensitivity was better in the thin-flap group at all postoperative points in the study, which may have been related to their finding of lower induction of sphere and coma aberrations in the thin-flap group compared with the PRK group. The authors recognize that the Slade study had a larger population size (n = 50 per group) and would have increased power to detect significant differences. 
Our study would have had increased power of analysis with a similar study group size, but results from analysis of HOAs would not likely change as P values for all HOAs at 6 months were ≥0.63. It would be difficult to make any such correlation between contrast sensitivity and HOAs from the results of this study. A loss of CDVA has been associated with the development of corneal haze in other studies, but as mentioned above none of the patients with visual regression developed haze.21–23 Findings in other studies showing that the biomechanics of eyes that have received thin-flap LASIK treatment are indistinguishable from those of PRK have led to suggestions that thin-flap LASIK is the best approach to LASIK.16 Although the present study did not find any statistically significant differences between thin-flap LASIK and PRK in terms of visual quality at 6 months, complications dealing with flap integrity in the thin-flap LASIK group were present which are not complications found in PRK. Although PRK remains a viable option for those unable to undergo LASIK, the use of thinner flaps may eliminate some of the complications seen with traditional LASIK. Larger studies are needed to better compare the complication rates of both methods and to determine how effective thin-flap LASIK will be in achieving the benefits of PRK and LASIK while avoiding the risks associated with each method. While thinner LASIK flaps attempt to preserve the biomechanical stability of the corneal stroma, at the same time, the flap itself becomes less stable, as was noted with the 2 flap tears and other complications occurring in the thin-flap LASIK group in this study. A study by Espandar and Meyer24 showed that most complications in flaps created by IntraLase femtosecond laser occurred at the hinge, which is where the 2 flap tears that occurred in this study. A thinner flap hinge would be biomechanically less stable and would increase the likelihood of intraoperative flap tear occurrence as well. Six of the 9 eyes with complications in the thin-flap LASIK group had microstriae, which are caused by the flattening of a weak corneal flap unable to maintain its curvature over the small area of stroma removed during ablation. The biomechanics of the flap and hinge, however, cannot be evaluated by the design of this study as analysis was done based on intended flap thickness, which has been shown to vary with the IntraLase FS60 femtosecond laser.25 The study could have been strengthened had intraoperative pachymetry or OCT been performed. Creating a flap with increased integrity would help prevent microstriae from forming and would also provide for a stronger hinge that would be less susceptible to flap tear. Possible ways to optimize flap integrity include modification of hinge and side-cut angle creation, as well as improved planarity and microdisruption of flap edges. This will allow improved adhesion of the flap to the underlying stroma. Continued improvements in laser technology may allow for safer creation of thinner flaps, helping to provide evidence for superior outcomes in thin-flap LASIK, permitting the biomechanical stability of PRK with the visual recovery of LASIK. 
Custom flap formation that minimizes weak areas susceptible to tearing will be helpful in achieving this difficult balance between corneal and flap integrity.\n\n**Question:** Compared to Pre-surgery baseline what was the result of Photo-refractive keratectomy (PRK) on Contrast sensitivity at 1 month seen at 3 cpd?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly decreased\n"
}
] |
significantly decreased
|
423
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Six-week high-intensity exercise program for middle-aged patients with knee osteoarthritis: a randomized controlled trial [ISRCTN20244858]\n\n ABSTRACT.BACKGROUND:\nStudies on exercise in knee osteoarthritis (OA) have focused on elderly subjects. Subjects in this study were middle-aged with symptomatic and definite radiographic knee osteoarthritis. The aim was to test the effects of a short-term, high-intensity exercise program on self-reported pain, function and quality of life.\n\nABSTRACT.METHODS:\nPatients aged 36–65, with OA grade III (Kellgren & Lawrence) were recruited. They had been referred for radiographic examination due to knee pain and had no history of major knee injury. They were randomized to a twice weekly supervised one hour exercise intervention for six weeks, or to a non-intervention control group. Exercise was performed at ≥ 60% of maximum heart rate (HR max). The primary outcome measure was the Knee injury and Osteoarthritis Outcome Score (KOOS). Follow-up occurred at 6 weeks and 6 months.\n\nABSTRACT.RESULTS:\nSixty-one subjects (mean age 56 (SD 6), 51 % women, mean BMI 29.5 (SD 4.8)) were randomly assigned to intervention (n = 30) or control group (n = 31). No significant differences in the KOOS subscales assessing pain, other symptoms, or function in daily life or in sport and recreation were seen at any time point between exercisers and controls. In the exercise group, an improvement was seen at 6 weeks in the KOOS subscale quality of life compared to the control group (mean change 4.0 vs. -0.7, p = 0.05). The difference between groups was still persistent at 6 months (p = 0.02).\n\nABSTRACT.CONCLUSION:\nA six-week high-intensive exercise program had no effect on pain or function in middle-aged patients with moderate to severe radiographic knee OA. Some effect was seen on quality of life in the exercise group compared to the control group.\n\nBODY.BACKGROUND:\nExercise is considered to be one of the most important treatments for patients with mild to moderate knee osteoarthritis [1,2]. Positive effects on pain and function, as well as cost-effectiveness have been reported [3,4]. The effect size obtained on pain experience is similar to that of pharmacological treatment [3,5]. The side effects have also been reported to be favorable, including reduced risk of inactivity-related disorders, such as cardiovascular disease and diabetes [2,6,7]. Moderate levels of physical activity are not associated with radiographic progression, but activities involving a risk of severe knee injury are closely related to increased risk of developing radiographic knee osteoarthritis [8-12]. The dose-response relationship of exercise on symptoms and function is not clear and exercise recommendations in osteoarthritis guidelines are based mostly on studies on elderly people, i.e. mean age ≥ 65 [3,13-15]. 
It is not clear whether exercise has a similar effect on pain and function in middle-aged patients compared with elderly patients. The aim of this study was to examine the effects of a short-term, high-intensity exercise program in middle-aged subjects (age 36–65) with definite radiographic knee osteoarthritis on self-reported pain, function, and quality of life.\n\nBODY.METHODS.SUBJECTS:\nA flow chart of the recruitment process is given in Figure 1. Radiologists and orthopedic surgeons at the Halmstad County Hospital, in the south-west of Sweden, and general practitioners within the catchments area of this hospital, were informed about the study and asked to list patients with radiographic knee osteoarthritis on a \"patients eligible for research\" list. Between October 1998 and October 2001 121 patients, referred by their general practitioner for radiographic examination because of long standing knee pain, were listed. Ninety-seven fulfilled the inclusion criteria: age 35–65, living in the defined geographic area, and diagnosis of radiographic osteoarthritis of Kellgren and Lawrence grade III or more, i.e. definite osteophytes and joint space narrowing. All listed patients received written information about the study. One week after the information was sent, patients were contacted by telephone, and invited to participate in the study. Twenty-eight patients declined participation for various reasons, the most common reason being lack of time and interest. To ensure only patients with symptoms due to knee osteoarthritis and eligible for exercise intervention were included, the following exclusion criteria were used: inflammatory joint disease, anterior cruciate ligament injury, known symptomatic injury to the menisci, hip symptoms more aggravating than the knee symptoms, about to have knee replacement surgery within 6 months, and co-morbidities not allowing exercise (Figure 1). Figure 1Flowchart of recruitment process and included patients. When eight or more patients fulfilled the inclusion criteria they were invited to baseline interview and examination for determination of exclusion criteria. Randomization was performed after the baseline examination. All patients were informed that they could be randomly allocated to either the exercise or the control group. After written informed consent, sixty-five subjects were randomized. Randomization was performed by the patient drawing a sealed envelope containing a piece of folded paper with either the word \"exercise\" or \"control\" written on it. Four persons were falsely randomized (one was too old at inclusion, one had severe hip osteoarthritis, one had fibromyalgia, and one had only joint space narrowing and no significant osteophytes), and thus 61 subjects entered the study. Thirty persons were allocated to the exercise group and 31 to the control group. Patients in the control group were offered exercise classes after the six-month follow-up period.\n\nBODY.METHODS.SUBJECTS.EXERCISE GROUP:\nThe number of participants exercising together varied from two to nine. There were eight intervention groups in all. One-hour exercise sessions, twice a week for six weeks, were supervised by a physical therapist (CT). The program consisted of weight-bearing exercises aimed at increasing postural control and endurance and strength in the lower extremity (see additional file 1 for the complete exercise program). Exercises were performed at five stations at submaximal intensity (minimum 60% of maximum heart rate (HRmax)). 
Intensity was gradually and individually increased during the six weeks by increased lever arms or range of motion. Patients were encouraged to exercise at their most vigorous intensity possible, without losing quality in performance or severely exacerbating pain. Pain during exercise was not considered as an obstacle if the patient perceived it as \"acceptable\" and no increased symptoms were persistent after 24 hours [16]. If pain exceeded this level, exercise intensity was reduced occasionally, until the \"acceptable\" level was found. On every occasion, the heart rate of two random participants was estimated at each station using Polar pulsimeters (Polar® Accurex Plus, Polar, Sweden). The other patients had their heart rate measured by the physical therapist or themselves, palpating their carotid arteries. Notes were taken by the supervising physical therapist, on every occasion and on all patients, about exercise intensity, heart rate, and perceived exertion according to Borg's Rate of Perceived Exertion scale (RPE) at each station [17]. These data were used to give the physical therapist a view of exercise intensity and to assure the preservation or increase of intensity from time to time. Patients were encouraged to keep up and increase intensity whenever possible throughout the six weeks. Patients received a thera band to perform daily pulley exercises at home. In addition, three exercises, which were considered as the most challenging to the individual, were chosen as daily home exercises. Patients were recommended to perform some kind of weight bearing submaximal activity, such as walking or their home exercises, for at least 30 minutes or two times 15 minutes every day.\n\nBODY.METHODS.SUBJECTS.CONTROL GROUP:\nThe controls were told not to make any lifestyle changes. They met the physical therapist (CT) for one hour at three times; baseline, follow up at 6 weeks and 6 months. After six months they were offered exercise classes or instructions and a home-exercise program.\n\nBODY.METHODS.OUTCOME MEASURES.PRIMARY OUTCOME:\nThe primary outcome measure was the disease-specific Knee injury and Osteoarthritis Outcome Score (KOOS) [18,19]. The KOOS assesses the patients' self-report of pain, other symptoms, activities of daily living, sport and recreation function, and knee-related quality of life, in 42 questions which take about 10 minutes to complete. The KOOS is scored from 0 to 100, separately for each subscale, 0 indicating extreme problems and 100 indicating no problems. A change of 10 points or more is considered a clinically significant change [20]. The questionnaire and scoring manual can be found at . The Western Ontario and McMaster Osteoarthritis Index (WOMAC) [21] is included in the KOOS, and WOMAC scores can also be calculated.\n\nBODY.METHODS.OUTCOME MEASURES.SECONDARY OUTCOME:\nSecondary outcome measures were the Short Form-36 item (SF-36), ergometer test, and five tests of functional performance. The SF-36 is a generic, widely used measure of general health status, which comprises eight subscales: Physical Functioning (PF), Role-Physical (RP), Bodily Pain (BP), General Health (GH), Vitality (VT), Social Functioning (SF), Role-Emotional (RE) and Mental Health (MH) [22]. The SF-36 is self-explanatory and takes about 10 minutes to complete. The SF-36 is scored from 0 to 100, 0 indicating extreme problems and 100 indicating no problems. 
The subscales assessing mainly physical components (PF, RP, BP, GH) were summarized to a physical component summary score (PCS), and the mental subscales (VT, SF, RE, MH) to a mental component summary score (MCS) [23]. Values are norm-based scored, using the U.S. general population norms from 1998. Each scale has the mean of 50 and standard deviation of 10. Scale score below 50 indicates a health status below average, and a score above 50 indicates a health status above average. Questionnaires were distributed prior to randomization at baseline, after 6 weeks, and 6 months. A bicycle ergometer test and five tests of functional performance were assessed (Figure 2). Figure 2Tests of functional performance. A) Åstrand's cycle-ergometer test [24]. B) Rising on one leg from sitting on lowest possible height [25,26]. C) One-leg hop [25], [27]. D) Lateral step-up [28]. E) One-leg semi squatting; maximum number during 30 sec. [26]. F) Heel-rising on one leg; maximum number during 20 sec. [26], [29]. 1. Åstrand's bicycle-ergometer test [24] (Fig 2A). 2. Rising on one leg, from sitting on lowest possible height [25,26] (Fig 2B). 3. One-leg hop [25,27] (Fig 2C). 4. Lateral step-up [28] (Fig 2D). 5. One-leg semi squatting; maximum number during 30 seconds [26] (Fig 2E). 6. Heel-raising on one leg; maximum number during 20 seconds [26,29] (Fig 2F). Tests of functional performance were recorded on three occasions: prior to randomization at baseline, after 6 weeks, and at 6 months. To assess compliance the number of exercise occasions attended was noted.\n\nBODY.METHODS.STATISTICS:\nPost-hoc, a power analysis was performed to estimate the number of patients needed to show a clinically significant difference between groups. Estimating the least clinical significant difference to be 11 ± 15 KOOS points, a total of 30 subjects in each group were needed to detect a difference with 80% power, p = 0.05. Data were analyzed using nonparametric tests. P-values of less than or equal to 0.05 were considered to be significant, and all tests were two-tailed. To compare groups, Mann-Whitney U-test was used. Friedman's test was used for repeated measures analysis of variance. Six weeks was considered as the time-point of primary interest, and 6 months as follow-up. Wilcoxon signed rank test was performed to compare changes from baseline to six weeks and 6 months respectively. Analyses were performed using SPSS 12.0.1 for Windows [30]. The study was approved by the Research Ethics Committee at Lund University, Sweden (LU 99–98), and is in compliance with the Helsinki Declaration.\n\nBODY.RESULTS.SUBJECTS:\nThe mean age of the 61 included subjects was 56 ± 6 years, and the mean BMI was 29.5 ± 4.8 kg/m2. Patient characteristics are shown in table 1. Twenty-eight patients in each group were available for follow-up. The reasons for dropout were lack of time, reorganization at work, sudden illness, and increased knee symptoms (Figure 1). There were no clinically significant differences in baseline characteristics between the groups. Patient characteristics are shown in table 1. 
Table 1 Patient characteristics at baseline Exercise group n = 30 Control group n = 31 p-value Age (years) mean ± SD (range) 54.8 ± 7.1 (36–64) 57.3 ± 4.7 (46–65) 0.16 Gender number (%) women 15 (50%) 16 (52%) 0.90 BMI (kg/m) 2 mean ± SD 29.6 ± 4.5 29.5 ± 5.1 0.78 Aerobic capacity (ml O 2 /kg x min) mean ± SD 25.9 ± 6.4 25.2 ± 4.9 0.66 KOOS* pain 60 ± 18 64 ± 19 0.38 KOOS* symptoms 63 ± 20 66 ± 18 0.67 KOOS* ADL 69 ± 18 71 ± 21 0.76 KOOS* sport & recreation 34 ± 31 37 ± 29 0.54 KOOS* QOL 40 ± 15 46 ± 21 0.31 Knee Injury and Osteoarthritis Outcome Score [ 18 , 19 ] Score from 0–100, worst to best. \n\nBODY.RESULTS.COMPLIANCE:\nThe total number of performed supervised exercise sessions by the 28 patients available for follow-up in the intervention group was 302/336 (89.9%). Patients participated on average in 11 out of 12 possible exercise classes (12 classes (n = 11), 11 (n = 9), 10 (n = 6), 9 (n = 1), 2 (n = 1)). The most common reason for absence was illness not related to knee osteoarthritis, and work-related lack of time.\n\nBODY.RESULTS.BETWEEN-GROUP DIFFERENCES:\nThere was no difference between groups in pain or self-estimated function at either 6 week or 6 month follow-up. Quality of life improved significantly in the exercise group compared to the control group at 6 weeks (4.0 vs. -0.7, p = 0.05) and the results persisted at 6 months (5.1 vs. -2.3, p = 0.02, Table 2) Table 2 Comparisons of change in Knee injury and Osteoarthritis Outcome Score (KOOS) ‡ [ 18 , 19 ] subscales between exercise and control group KOOS Subscales Exercise group Control group Change in KOOS score* 95 % CI Change in KOOS score* 95 % CI p † Pain 6 w 1.8 -3.2 – 6.8 -0.3 -6.2 – 5.7 0.49 6 m 3.1 -1.9 – 8.2 -1.1 -6.6 – 4.4 0.32 Symptom 6 w 0.2 -5.1 – 5.6 -3.8 -7.7 – 0.0 0.07 6 m 1.0 -3.8 – 5.8 -3.4 -8.8 – 1.9 0.31 ADL 6 w 2.0 -2.3 – 6.3 -0.6 -7.0 – 5.8 0.96 6 m 0.9 -3.8 – 5.6 -1.9 -7.7 – 3.9 0.61 SportRec 6 w 1.2 -7.9 – 10.4 -4.4 -12.6 – 3.7 0.22 6 m 0.5 -10.1 – 11.2 -8.3 -19.5 – 2.8 0.32 QOL 6 w 4.0 -0.4 – 8.5 -0.7 -5.6 – 4.3 0.05 6 m 5.1 -0.7 – 11.0 -2.3 -9.5 – 4.9 0.02 * negative = worsening, positive = improvement † p-value for between group differences in change over time ‡ [ 18 , 19 ] The individual differences ranged from clinically significant improvement of at least 10 points to clinically significant deterioration in all KOOS subscales and in both the exercise and control group (Figure 3). Figure 3Change in KOOS pain score at six weeks. The individual change in KOOS pain at 6 weeks compared to baseline ranged from improvement to worsening, in both exercise and control group. A change of 10 points or more is considered clinically significant [20]. A similar pattern was seen for all KOOS and SF-36 subscales.\n\nBODY.RESULTS.BETWEEN-GROUP DIFFERENCES.SECONDARY OUTCOMES:\nA significant improvement was found in the exercise group compared to the control group at six weeks with regard to the SF-36 Mental Component Summary scale (MCS) (2.1 vs -1.6, p = 0.04). At six months follow-up this difference was no longer persistent (Table 3). Table 3 Comparisons of change in SF-36 Physical and Mental Components Summaries (PCS and MCS) [ 23 ] between exercise and control group. 
Short Form-36 item (SF-36) [22] Exercise group Control group mean 95%CI mean 95%CI p* Physical Component Summary (PCS) Baseline 42.5 24.4 – 57.5 43.8 24.2 – 57.3 0.49 Change at 6 weeks 3.0 -5.9 – 13.4 0.3 -15.2 – 12.6 0.13 Change at 6 months 3.0 -5.9 – 16.3 -0.7 -14.8 – 9.8 0.09 Mental Component Summary (MCS) Baseline 55.6 40.2 – 66.2 56.3 37.0 – 67.0 0.63 Change at 6 weeks 1.6 -10.6 – 15.0 -2.1 -16.9 – 11.5 0.04 Change at 6 months 0.7 -18.1 – 13.2 -0.7 -16.8 – 12.8 0.40 * Comparison between groups Improvements in functional performance of 0–20 % were seen in both groups at six weeks and six months. There was no difference in improvement between exercisers and controls (p = 0.08–0.9). See additional file 2 for the change in functional performance.\n\nBODY.DISCUSSION.MAIN MESSAGE:\nSix weeks of intensive exercise had no effect on self-reported pain or function in middle-aged patients with symptomatic and moderate-severe radiographic knee osteoarthritis.\n\nBODY.DISCUSSION.COMPARISONS WITH OTHER STUDIES:\nQuite opposite to previously published studies on exercise in knee osteoarthritis we found no improvement in pain or function. Possible reasons for this include our study group having moderate to severe osteoarthritis compared with mild to moderate in most previous studies, being younger than previously studied groups and the intervention being of comparably high intensity. It has been suggested that the responsiveness to exercise is modified by the loss of joint space width [31]. The homogeneity of this study population, with regard to radiographic changes, provided us the possibility to study the effects of exercise on patients with moderate to severe radiographic knee osteoarthritis. Can significant improvements of pain or self-reported function be expected in patients with radiographic knee osteoarthritis corresponding to Kellgren & Lawrence grade 3 or more? In this study, no improvements were seen on group level in pain or function. However, regular exercise in general is important to prevent diseases caused by inactivity [6], and thus patients with knee osteoarthritis should be encouraged to exercise. In clinical practice, patients with severe knee osteoarthritis should have treatments based on individual preferences and different stages of motivation [32]. It can be argued that the exercise intensity was too high for this group with moderate to severe knee osteoarthritis. Even though the intensity of each exercise was individually adapted, all individuals exercised at a minimum of 60% of HR max. It has been suggested that pain during exercise might be a protective mechanism in knee osteoarthritis, i.e. an increase in pain from too intensive exercises may restrain patients from further joint loading, which otherwise could cause further cartilage damage [33]. Patients in the current study were told to reduce the exercise intensity if pain during exercise was perceived as worse than 'acceptable', or persisted more than 24 hours. It is suggested that the different degrees of varus-valgus laxity should be taken into account in exercise interventions, to enhance the functional outcome [34,35]. Severe knee osteoarthritis is associated with a hip-knee-ankle malalignment and an increase in varus-valgus laxity compared to healthy knees [36]. It is possible that varus-valgus laxity mediated the effect of exercise on pain since all patients had radiographic changes corresponding to Kellgren and Lawrence grade III or more. 
Malalignment may cause increased joint loads, and greater quadriceps strength might further increase joint load by the muscles compressing the articular surfaces [37]. Younger patients are usually more physically active than elderly [38], and have higher demands on level of physical function and physical performance at work or leisure time. Thus, moderate to severe knee osteoarthritis might be perceived as more disabling by younger individuals compared to elderly. Our study population was younger (<65 years) and comprised more men (49%) than most other populations with knee osteoarthritis described in randomized controlled trials of exercise [13-15,39,40], which might have reduced the effect on self-reported function in the present study. This study showed no significant differences on self-reported pain and function either between or within groups. A post-hoc analysis was performed to study the possibility that the benefit from exercise was larger in subjects with worse pain at baseline. Fifteen patients in the exercise group were compared to 13 from the control group who had worse than total group median pain score (KOOS Pain 58 on a 0–100, worst to best scale) at baseline. The groups had comparable patient characteristics. The changes seen in these subgroups were however not different from the changes seen in the total groups. A possible limitation could be lack of power. A post-hoc analysis was performed to estimate the number of patients needed to show a clinically significant difference of 11 ± 15 KOOS-points [20]. The standard deviation of 15 is supported by results from randomized controlled trials of glucosamine supplementation [41] and a nutritional supplement [42] for knee osteoarthritis, where significant group differences were found in KOOS pain and ADL subscales. The number of subjects in each treatment arm in these RCT:s ranged from 15 to 27. The standard deviations in KOOS subscales have not previously been determined in exercise interventions. Only one of the five KOOS subscales showed a statistical significant improvement, and it can not be excluded that this result could be due to chance. The improvement of the KOOS subscale Quality Of Life in the exercise group was of doubtful clinical significance, however the improvement persisted over time, and is in accordance with previous findings of impact from exercise on mental health aspects in patients with knee osteoarthritis [31,43,44]. Group dynamics, support, or attention received may possibly have influenced the quality of life more than the exercise itself in the present study. Psychosocial factors are important determinants of physical function [45], and our results suggest that supervised exercises and follow-up are important, and that quality of life should be evaluated in osteoarthritis interventions.\n\nBODY.CONCLUSION:\nA six-week high-intensive exercise program had no effect on pain or function in middle-aged patients with moderate to severe radiographic knee OA. Some effect was seen on quality of life in the exercise group compared to the control group.\n\nBODY.COMPETING INTERESTS:\nThe author(s) declare that they have no competing interests.\n\nBODY.AUTHORS' CONTRIBUTIONS:\nCT participated in design, exercise intervention, assessments and follow-ups, statistical analyses and writing. ER participated in design, analysis and interpretation of the data, and critically revised the article. IP participated in design and interpretation of the data and critically revised the article. 
CE initiated and obtained necessary permissions for the study, arranged the initial funding, and participated in the analyses, interpretation and revision of the manuscript. All authors read and approved the final manuscript.\n\nBODY.PRE-PUBLICATION HISTORY:\nThe pre-publication history for this paper can be accessed here: \n\nBODY.SUPPLEMENTARY MATERIAL:\nAdditional File 1: Intensive exercise program. Additional File 2: Tests of functional performance.\n\n**Question:** Compared to No exercise what was the result of Exercise on Knee injury and Osteoarthritis Outcome Score quality of life subscale at 6 months?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly increased\n"
}
] |
significantly increased
|
323
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Efficacy and Tolerability of Budesonide/Formoterol in One Hydrofluoroalkane Pressurized Metered-Dose Inhaler in Patients with Chronic Obstructive Pulmonary Disease\n\n ABSTRACT:\nBackground: Combination therapy with a long-acting bronchodilator and an inhaled corticosteroid (ICS) is recommended in patients with chronic obstructive pulmonary disease (COPD) who have frequent exacerbations. The efficacy and tolerability of the combination of budesonide/formoterol have been demonstrated in patients with COPD when administered via the dry powder inhaler (DPI) in a 1-year study and when administered via the hydrofluoroalkane (HFA) pressurized metered-dose inhaler (pMDI) in a 6-month study. Objective: This study assessed the long-term efficacy and tolerability of budesonide/formoterol HFA pMDI in patients with moderate to very severe COPD. Methods: This was a 12-month, randomized, double-blind, double-dummy, parallel-group, active- and placebo-controlled, multicentre study (NCT00206167) of 1964 patients aged ≥40 years with moderate to very severe COPD conducted from 2005 to 2007 at 237 sites in the US, Europe and Mexico. After 2 weeks of treatment based on previous therapy (ICSs, short-acting bronchodilators allowed), patients received one of the following treatments twice daily: budesonide/formoterol pMDI 160/4.5 μg × two inhalations (320/9 μg); budesonide/formoterol pMDI 80/4.5 μg × two inhalations (160/9 μg); formoterol DPI 4.5 μg × two inhalations (9 μg); or placebo. Main outcome measures: The co-primary efficacy variables were pre-dose forced expiratory volume in 1 second (FEV1) and 1-hour post-dose FEV1. Results: Budesonide/formoterol 320/9 μg demonstrated greater improvements in pre-dose FEV1 versus formoterol (p = 0.008), and both budesonide/formoterol doses demonstrated greater improvements in 1-hour post-dose FEV1 versus placebo (p < 0.001). The rate of COPD exacerbations was lower in both budesonide/formoterol groups compared with formoterol and placebo (p ≤ 0.004). Both budesonide/formoterol doses were more effective than placebo (p ≤ 0.006) for controlling dyspnoea and improving health status (St George's Respiratory Questionnaire). All treatments were generally well tolerated. The incidence of pneumonia was not different for active (3.4–4.0%) and placebo (5.0%) groups. Conclusions: Budesonide/formoterol pMDI (320/9 μg and 160/9 μg) improved pulmonary function and reduced symptoms and exacerbations over 1 year in patients with moderate to very severe COPD. Only budesonide/formoterol pMDI 320/9 μg demonstrated greater efficacy for both co-primary variables compared with formoterol DPI 9 μg. 
Both budesonide/formoterol pMDI dosages were well tolerated relative to formoterol and placebo.\n\nABSTRACT.ELECTRONIC SUPPLEMENTARY MATERIAL:\nSupplementary material is available for this article at 10.2165/00003495-200969050-00004 and is accessible for authorized users.\n\nBODY.BACKGROUND:\nIn patients with chronic obstructive pulmonary disease (COPD) and frequent exacerbations, the combination of a long-acting bronchodilator and an inhaled corticosteroid (ICS) is recommended.[1] Budesonide and formoterol delivered via a single dry powder inhaler (DPI; Symbicort® Turbuhaler®; AstraZeneca, Lund, Sweden) have been shown to improve pulmonary function[2,3] and reduce COPD exacerbation rates.[2,3] Fixed-combination budesonide/formoterol is available in the US in a hydrofluoroalkane pressurized metered-dose inhaler (pMDI; Symbicort® Inhalation Aerosol; AstraZeneca LP, Wilmington, DE, USA). While most patients with COPD can benefit from either a pMDIor DPI, the pMDI formulation allows patients whomay have inadequate peak inspiratory flow for optimal delivery from a DPI to use the pMDI effectively. A recent 6-month study demonstrated the efficacy and tolerability of budesonide/formoterol pMDI in patients with moderate to very severe COPD.[4] Treatment with budesonide/formoterol pMDI resulted in significant improvements in pulmonary function, dyspnoea and health-related quality of life; however, the duration of the study was limiting with respect to the analysis of low-frequency events, such as COPD exacerbations, and precluded an assessment of safety outcomes with long-term exposure.[4] In this study, the long-term efficacy and tolerability of two dosage strengths of budesonide/formoterol pMDI were assessed over 12 months in patients with moderate to very severe COPD.\n\nBODY.PATIENTS AND METHODS.PATIENTS:\nThe inclusion criteria were designed to select a population with moderate to very severe COPD with previous exacerbations (i.e. appropriate candidates for combination ICS/long-acting β2-adrenoceptor agonist [LABA] therapy): age ≥40 years, diagnosis of symptomatic COPD for >2 years, ≥10 pack-year smoking history, prebronchodilator forced expiratory volume in 1 second (FEV1) of ≤50% of predicted normal and prebronchodilator FEV1/forced vital capacity (FVC) of <70%. Patients were to have a Modified Medical Research Council dyspnoea scale score of ≥2 and a history of at least one COPD exacerbation requiring oral corticosteroids or antibacterials within 1–12 months before the first study visit. Additional enrolment criteria were the same as those in a similar 6-month study by Tashkin et al.[4]\n\nBODY.PATIENTS AND METHODS.STUDY DESIGN:\nThis was a randomized, double-blind, double-dummy, parallel-group, active- and placebo-controlled, multicentre trial (ClinicalTrials.gov identifier: NCT00206167) conducted from 2005 to 2007 at 237 sites in the US, Europe and Mexico. The study protocol was approved by a local institutional review board and ethics committee, and written informed consent was obtained from patients. The study was designed to conform with the Declaration of Helsinki, and was consistent with the International Conference on Harmonisation and Good Clinical Practice and applicable regulatory requirements. After meeting eligibility criteria, patients entered a 2-week run-in period, during which they received ICS monotherapy if previously stable on ICS (alone or in combination) and ipratropium bromide at a fixed dose if previously receiving anticholinergics. 
Albuterol (salbutamol) was permitted for rescue use throughout the study. At randomization, all previous ICSs were discontinued and patients were randomized to one of four treatments (figure 1). Patients who were treated with ipratropium bromide during the run-in period were allowed to continue on it throughout the study period. Other allowed and disallowed concomitant medications were the same as those reported in the study by Tashkin et al.[4]\nFig. 1Study design. To maintain blinding, patients received both a pressurized metered-dose inhaler (pMDI) and a dry powder inhaler (DPI) containing either active treatment or double-dummy placebo (PL) as appropriate. This study used formoterol (FM) DPI (Oxis® Turbuhaler®, AstraZeneca, Lund, Sweden) as the FM comparator because FM is not available as a hydrofluoroalkane pMDI in the US. A previous study in asthma patients reported equivalent FM-related bronchodilatory effects when FM was administered in combination with budesonide (BUD) via pMDI or alone via DPI.[5] Patients were asked to return to the clinic for follow-up visits 3–8 at the end of months 1, 2, 4, 6, 9 and 12, and received a telephone call 4 weeks after the last clinic visit. bid = twice daily; R = randomization.\n\nBODY.PATIENTS AND METHODS.EFFICACY EVALUATIONS:\nThe co-primary efficacy variables, measured at all clinic visits, were pre-dose FEV1 and 1-hour post-dose FEV1. Additional pulmonary function variables included pre-dose and 1-hour post-dose FVC measured at all clinic visits, and morning and evening peak expiratory flow (PEF) recorded daily in patient diaries. In a subset of patients (n = 491 of 1964; 25%) who agreed to undergo spirometry testing, 12-hour serial spirometry was performed and pre-dose and 1-hour post-dose inspiratory capacity (IC) were collected at randomization and months 6 and 12. FEV1 was measured pre-dose and at 5, 15, 30, 60, 120, 180, 240, 360, 480, 600 and 720 minutes post-dose. Baseline-adjusted average 12-hour FEV1 was calculated as the area between the 12-hour post-dose FEV1-over-time curve and the baseline pre-dose FEV1, divided by observation time. Spirometry was performed according to American Thoracic Society guidelines.[6] Crapo-predicted normals for FEV1 were used.[7] Secondary efficacy endpoints included COPD exacerbations, dyspnoea (separate and composite scores) and health status, all assessed as previously described.[4] A COPD exacerbation was defined as worsening of COPD requiring an oral corticosteroid or hospitalization.[4] Dyspnoea was assessed daily before the evening dose of study medication using the Breathlessness Diary, a validated single-item component of the Breathlessness Cough and Sputum Scale (BCSS).[8] Health status was assessed at months 1, 2, 6 and 12 using the St George's Respiratory Questionnaire (SGRQ).[9,10] Sleep score, percentage of awakening-free nights (sleep score of 0) and study rescue medication use were assessed as previously described by Tashkin et al.[4]\n\nBODY.PATIENTS AND METHODS.SAFETY EVALUATIONS:\nSafety was assessed by adverse event (AE) reporting. Pneumonia events were reported by physicians based on the Medical Dictionary for Regulatory Activities (version 10.0) pneumonia-related preferred terms (pneumonia, bronchopneumonia, lobar pneumonia or pneumonia staphylococcal). 
Vital signs and 12-lead ECGs were evaluated as previously described by Tashkin et al.[4] Subsets of patients were assessed for 24-hour urinary cortisol (n = 179), 24-hour Holter monitoring (n = 520), bone mineral density (BMD) at the hip and spine regions (n = 326), and ophthalmological assessments (n = 461), including intraocular pressure and progression of lenticular opacities. In all patients, blood and urine samples were collected, and comprehensive physical examinations were carried out at the time of screening and at the end of month 12. Blood and urine samples were analysed by a central laboratory (Quest Diagnostics Clinical Laboratories, Van Nuys, CA, USA). Vital signs, including heart rate and blood pressure, were measured at all study visits. Samples to assess 24-hour urinary cortisol levels were collected in a subset of patients at or before randomization and within 1 week before the 6- and 12-month visits, and analysed by Quest Diagnostics. 24-Hour Holter monitoring was performed at screening and months 1 and 4, BMD assessments (two sets of dual energy x-ray absorptiometry scans in the hip and spine regions) were performed at screening and the end of month 12, and ophthalmology assessments (intraocular pressure and lenticular opacities) were performed at screening and the end of months 6 and 12.\n\nBODY.PATIENTS AND METHODS.STATISTICAL ANALYSES:\nThe efficacy analysis set (i.e. intent-to-treat population) included all randomized patients who received at least one dose of randomized study medication and contributed sufficient data for at least one co-primary or secondary efficacy endpoint to be calculated during the randomized treatment period. The safety analysis population included all randomized patients who received at least one dose of randomized study medication and from whom any post-randomization data were available. For the subsets of patients who underwent serial spirometry, 24-hour urinary cortisol, BMD, 24-hour Holter monitoring and ophthalmological analyses, analysis sets were defined as those patients who received at least one dose of randomized study medication and had baseline and on-treatment values for the variable being assessed. Patients who discontinued prematurely completed final visit (month 12) assessments at the time of discontinuation, followed by the 4-week follow-up. A sample size of approximately 400 patients per treatment group was calculated to allow 90% power to detect a reduction in the number of COPD exacerbations of approximately 30%, adjusting for overdispersion of 2.3. This sample size ensured >95% power to detect a difference of 0.10L in FEV1, given an estimated standard deviation of 0.3 L. All tests were two-sided, with p ≤ 0.05 considered significant. The prespecified primary comparators were formoterol DPI for pre-dose FEV1 to demonstrate the contribution of budesonide and placebo for 1-hour post-dose FEV1. To address issues of multiplicity relating to multiple-dose comparisons and multiple variables of interest, a fixed-sequence testing procedure was adopted. The primary comparisons were prespecified as budesonide/formoterol pMDI 320/9 μg compared with (i) placebo for pre-dose FEV1 and 1-hour post-dose FEV1; and (ii) formoterol DPI 9 μg for pre-dose FEV1. If significant differences were obtained, comparisons continued with budesonide/formoterol pMDI 320/9 μg compared with placebo for (i) dyspnoea; (ii) SGRQ total score; and (iii) number of exacerbations. 
If significant differences were obtained for each of these key secondary variables, the lower dose of budesonide/formoterol pMDI was tested on the co-primary variables, as previously described, and, if significant differences were obtained, testing continued with number of exacerbations, dyspnoea and SGRQ total score compared with placebo. For all secondary efficacy variables, the primary comparison was budesonide/formoterol pMDI versus placebo. Primary and secondary variables were assessed as the change from baseline to the average over the randomized treatment period except for average 12-hour FEV1, mean FEV1 at 12 hours and SGRQ at end of treatment. Changes from baseline in the co-primary efficacy variables were analysed via analysis of co-variance (ANCOVA), adjusting for treatment, country and baseline value. The following three key secondary efficacy variables were identified: (i) dyspnoea; (ii) SGRQ total score; and (iii) exacerbations. Secondary variables were analysed similar to the co-primary efficacy variables; however, exacerbation rate (number/patient-treatment year) was analysed using a Poisson regression model, adjusted for country and differential randomization time, and time to first COPD exacerbation was described via Kaplan-Meier plot and analysed using the log-rank test. The number and percentage of patients experiencing clinically meaningful changes (based on a prespecified minimal important difference) in dyspnoea were analysed using the Cochran-Mantel-Haenszel test, adjusting for country. Geometric mean 24-hour urinary cortisol levels at end of treatment were compared between treatment groups using a multiplicative ANCOVA model. Mean changes from baseline to the average during the randomized treatment period in 12-lead ECG, and Holter variables and ophthalmology variables were analysed using a model similar to that used for the co-primary variables. BMD variables (natural logarithm of the analysis timepoint minus the natural logarithm of the respective baseline value for hip and spine) were analysed using an ANCOVA model adjusting for country, treatment and natural logarithm of the baseline value. Other safety data were summarized using descriptive statistics. No formal hypothesis testing of the safety data was performed.\n\nBODY.RESULTS.PATIENTS:\nOf 1964 randomized patients, 1355 completed the study (figure 2). Discontinuation was greater with placebo versus budesonide/formoterol and formoterol driven by withdrawal of consent. Time to discontinuation was significantly (p ≤ 0.004) prolonged in both budesonide/formoterol groups versus placebo. AE was the most common reason for study discontinuation, with a similar incidence across treatments (12.1–13.5%). Approximately 60% of patients had documented use of an ICS, either alone or in combination before entering the study. The percentage of patients who discontinued from the study in the formoterol and placebo groups was slightly higher in patients previously treated with ICS (34.9% and 38.0%, respectively) compared with those not previously receiving ICS before the study (26.8% and 33.9%, respectively). In contrast, patients in the budesonide/formoterol 320/9-μg group previously receiving ICS had a lower discontinuation rate compared with those not previously receiving ICS (23.1% vs 34.1%, respectively). In the budesonide/formoterol 160/9-μg group, the percentage of patients who discontinued the study was similar for those who were previously receiving an ICS (29.5%) and those who were not (28.2%).\nFig. 
2Patient disposition. AE = adverse event; bid = twice daily; BUD = budesonide; DPI = dry powder inhaler; FM = formoterol; med = medication; PL = placebo; pMDI = pressurized metered-dose inhaler. Most demographic and disease characteristics at baseline were similar across treatments (table I). However, in the serial spirometry subset (n = 491), mean percentage reversibility was greater in the budesonide/formoterol 320/9 μg (19.9%), budesonide/formoterol 160/9 μg (20.7%) and placebo (19.5%) groups versus formoterol (16.9%). COPD severity[1] was moderate in 17.8%, severe in 60.4% and very severe in 21.5% of patients. Common co-morbid conditions included hypertension (41.6%), lipid profile abnormalities (22.0%), cardiac disease (17.7%), diabetes mellitus (11.0%), osteoporosis (10.5%), cataracts (5.2%), atrial fibrillation/arrhythmia (4.3%) and congestive cardiac failure (2.9%).\nTable IPatient demographic and baseline clinical characteristics of randomized patients\n\nBODY.RESULTS.EFFICACY EVALUATIONS.PULMONARY FUNCTION.CO-PRIMARY ASSESSMENTS:\nImprovements in pre-dose FEV1 were significantly greater for budesonide/formoterol 320/9 μg compared with formoterol (primary comparison; p = 0.008) and for both budesonide/formoterol dosages compared with placebo (p < 0.001) [figure 3a]. Although improvements in 1-hour post-dose FEV1 were significantly greater for both budesonide/formoterol dosages compared with placebo (primary comparison; p < 0.001), budesonide/formoterol 320/9 μg also resulted in significantly greater improvements compared with formoterol (p = 0.023) [figure 3b]. Improvements from baseline were apparent at the first assessment (pre-dose FEV1 at end of month 1; 1-hour post-dose FEV1 at day of randomization) and overall maintained over the 12-month treatment period for both budesonide/formoterol dosages (figure 3a and b).\nFig. 3Co-primary efficacy endpoints. Least squares mean change from baseline by study visit over the randomized treatment period in (a) pre-dose forced expiratory volume in 1 second (FEV1) and (b) 1-hour post-dose FEV1. BUD = budesonide; DPI = dry powder inhaler; FM = formoterol; pMDI = pressurized metered-dose inhaler. * p < 0.001 vs placebo; † p ≤ 0.023 vs FM.\n\nBODY.RESULTS.EFFICACY EVALUATIONS.PULMONARY FUNCTION.SECONDARY ASSESSMENTS:\nMorning and evening PEF improved significantly for all active treatments compared with placebo (p ≤ 0.012) and for both budesonide/formoterol dosages compared with formoterol (p ≤ 0.017) [table II]. Results for pre-dose and 1-hour post-dose FVC are presented in the online supplement (see figures S1a and b in the supplementary material ['ArticlePlus'] at http://links.adisonline.com/DGZ/A5).\nTable IIMean (SD) changes from baseline in additional pulmonary function assessments In the serial spirometry subset, a ≥15% improvement in FEV1 was observed at 5 minutes after dose administration (first assessment) with all active treatments on the day of randomization (budesonide/formoterol 320/9 μg, 17.7%; budesonide/formoterol 160/9 μg, 20.3%; formoterol 9 μg, 16.5%) and at end of treatment (budesonide/formoterol 320/9 μg, 21.8%; budesonide/formoterol 160/9 μg, 22.4%; formoterol 9 μg, 15.0%) [figures 4a, b and c]. There was no evidence of a diminished effect at end of treatment over the 12-hour period in the budesonide/formoterol groups. 
Mean FEV1 at 12 hours and baseline-adjusted average 12-hour FEV1 were significantly improved with both budesonide/formoterol dosages compared with placebo on the day of randomization and at end of treatment (p ≤ 0.002) [table II]. Although both budesonide/formoterol dosages demonstrated significant improvements in mean FEV1 at 12 hours and in baseline-adjusted average 12-hour FEV1 compared with formoterol on the day of randomization (p ≤ 0.029), only budesonide/formoterol 320/9 μg demonstrated this effect at the end of treatment (p ≤ 0.004) [table II].\nFig. 4Mean percentage change from baseline in forced expiratory volume in 1 second (FEV1) over 12 hours at randomization and end of treatment (EOT) for (a) budesonide (BUD)/formoterol (FM) 320/9 μg twice daily (bid) vs placebo, (b) BUD/FM 160/9 μg bid vs placebo and (c) FM 9 μg bid vs placebo. DOR = day of randomization; DPI = dry powder inhaler; pMDI = pressurized metered-dose inhaler. No significant differences in pre-dose IC were observed among the treatment groups (figure 5a). Improvements from baseline to the average over the randomized treatment period in 1-hour post-dose IC were significantly greater with both budesonide/formoterol dosages compared with placebo (p < 0.001) and formoterol (p ≤ 0.018) [figure 5b]. Both dosages of budesonide/formoterol resulted in a 350 mL (≈25%) improvement from baseline to the average over the randomized treatment period in 1-hour post-dose IC, and formoterol resulted in a 210 mL (17%) improvement in 1-hour post-dose IC.\nFig. 5Least squares mean change from baseline by study visit over the randomized treatment period in (a) pre-dose inspiratory capacity (IC) and (b) 1-hour post-dose IC. BUD = budesonide; DPI = dry powder inhaler; FM = formoterol; pMDI = pressurized metered-dose inhaler. * p < 0.001 vs placebo; † p = 0.01 vs FM; ‡ p < 0.05 vs FM; § p < 0.01 vs placebo.\n\nBODY.RESULTS.EFFICACY EVALUATIONS.CHRONIC OBSTRUCTIVE PULMONARY DISEASE (COPD) EXACERBATIONS:\nTime to first COPD exacerbation was significantly prolonged with both budesonide/formoterol dosages compared with placebo (p ≤ 0.004) and with budesonide/formoterol 320/9 μg compared with formoterol (p = 0.026) [figure 6]. In addition, significant reductions in the overall number of exacerbations per patient-treatment year were observed with budesonide/formoterol 320/9 μg and 160/9 μg versus placebo (37% and 41%, respectively; p < 0.001) and formoterol (25% and 29%, respectively; p ≤ 0.004) [see figure S2 in the supplementary material]. These reductions were driven by exacerbations treated with oral corticosteroids, the rate of which was reduced with budesonide/formoterol 320/9 μg and 160/9 μg versus placebo (37.5% and 42.9%, respectively) and formoterol (24.1% and 30.6%, respectively) [p ≤ 0.006]. In addition, the percentage of patients in the budesonide/formoterol 320/9 μg, budesonide/formoterol 160/9 μg, formoterol and placebo groups who experienced an exacerbation was greater in those receiving ICS therapy either alone or in combination before the study (33.7%, 38.2%, 39.9% and 40.1%, respectively) compared with those not previously receiving ICS (25.8%, 24.9%, 29.4% and 32.8%, respectively).\nFig. 6Kaplan-Meier probability curve for the time to first chronic obstructive pulmonary disease exacerbation during randomized treatment. BUD = budesonide; DPI = dry powder inhaler; FM = formoterol; pMDI = pressurized metered-dose inhaler. 
* p ≤ 0.004 vs placebo; † p = 0.026 vs FM.\n\nBODY.RESULTS.EFFICACY EVALUATIONS.HEALTH STATUS AND COPD SYMPTOMS:\nImprovements in SGRQ total score were significantly greater for both budesonide/formoterol dosages compared with placebo (p ≤ 0.006) and for budesonide/formoterol 160/9 μg compared with formoterol (p = 0.006; table III). The improvements from baseline in SGRQ total score in the active treatment groups were not clinically meaningful versus placebo (i.e. decrease of ≥4 points). Significantly greater improvements in COPD symptom variables were observed for both budesonide/formoterol dosages versus placebo for all variables (BCSS, dyspnoea score, cough score, sputum score, sleep score, percentage of awakening-free nights and total daily rescue medication use) [p ≤ 0.003]. Compared with formoterol, mean improvements were significantly greater with budesonide/formoterol 320/9 μg (p ≤ 0.038) for all COPD symptom variables except sputum score and percentage of awakening-free nights, and with budesonide/formoterol 160/9 μg (p ≤ 0.047) for all COPD symptom variables except BCSS, dyspnoea and sputum score (table IV).\nTable IIIMean changes (SD) in SGRQ total and domain scores from baselinea to end of treatmentbTable IVMean (SD) changes in chronic obstructive pulmonary disease (COPD) symptom variables from baselinea to the average over the randomized treatment period Improvements (i.e. reductions) in dyspnoea scores were significantly greater for all active treatments versus placebo (p ≤ 0.003) and for budesonide/formoterol 320/9 μg versus formoterol (p = 0.032; table IV). All active treatment arms demonstrated ≥0.2 points (minimal important difference) change from baseline, but only budesonide/formoterol 320/9 μg had an increase of ≥0.2 points over placebo. A significantly (p < 0.001) greater percentage of patients in both budesonide/formoterol groups and in the formoterol group experienced clinically meaningful improvements in dyspnoea versus placebo (see figure S3 in the supplementary material).\n\nBODY.RESULTS.SAFETY EVALUATIONS:\nMean treatment exposure was lowest for placebo (270 days) compared with budesonide/formoterol 320/9 μg (305 days), budesonide/formoterol 160/9 μg (299 days) and formoterol (289 days). The most commonly reported AE (irrespective of causality) was COPD, which had a lower incidence with budesonide/formoterol 320/9 μg versus placebo and a slightly higher incidence with budesonide/formoterol 160/9 μg versus placebo (table V). AEs considered by the investigator to be related to study medication were generally similar among treatment groups with the most commonly reported being oral candidiasis, COPD and dysphonia (see table SI in the supplementary material).\nTable VOverall adverse events (AEs) [irrespective of relationship to study medication] reported by ≥3% of patients The total incidence of pneumonia-related AEs (pneumonia, bronchopneumonia, lobar pneumonia and pneumonia staphylococcal) was similar for budesonide/formoterol 320/9 μg and 160/9 μg (4.0% and 3.4%, respectively) compared with formoterol (3.4%) and placebo (5.0%) [see table SII in the supplementary material]. For potential lung infections other than pneumonia, the incidence was slightly higher in the active treatment groups versus placebo, which was driven largely by bronchitis (see table SII in the supplementary material). 
AEs typically or potentially associated with local and systemic effects of inhaled corticosteroids (local effects: aphonia, dysphonia, oral candidiasis and candidiasis; systemic effects: weight gain, adrenal suppression, ocular effects, skin effects, psychiatric disorder, diabetes control, thirst, taste effects and bone effects) were more frequent with budesonide/formoterol 320/9 μg (10.3% and 4.0%, respectively) and budesonide/formoterol 160/9 μg (5.7% and 4.5%, respectively) than formoterol (0.6% and 2.6%, respectively) and placebo (2.5% and 2.7%, respectively). The overall incidence of LABA class effects (i.e. tremor, palpitation, tachycardia, potassium changes, glucose changes, headache, agitation, anxiety, sleep effects and muscle cramp) was low, but higher among active treatment groups (budesonide/formoterol 320/9 μg [9.5%], budesonide/formoterol 160/9 μg [8.9%] and formoterol [6.5%]) versus placebo (4.8%). The incidence of cardiac-related AEs was higher for all active treatments (10.5–11.3%) versus placebo (6.9%). The most common cardiac-related AEs were hypertension and angina pectoris (budesonide/formoterol 320/9 μg [2.4% and 1.2%, respectively], budesonide/formoterol 160/9 μg [1.6% and 0.8%, respectively], formoterol [2.6% and 1.8%, respectively] and placebo [2.1% and 1.0%, respectively]). Although the number of atrial fibrillation AEs was low, all cases occurred within the active treatment groups. Discontinuations due to AEs (DAEs) occurred in 238 patients, with a similar incidence observed across treatment groups (range 11.3–12.5%) [see table SIII in the supplementary material]. The most common DAE was COPD, which was highest in the formoterol group (7.3%) and lowest in the budesonide/formoterol 320/9 μg group (4.0%); the incidence was similar in the budesonide/formoterol 160/9 μg (6.1%) and placebo (6.0%) groups (see table SIII in the supplementary material). The overall incidence of non-fatal serious adverse events (SAEs) was highest with formoterol (17.8%) compared with the other treatment groups and higher with budesonide/formoterol 320/9 μg (15.6%) and budesonide/formoterol 160/9 μg (13.6%) versus placebo (12.1%) [see table SIV in the supplementary material]. The most common non-fatal SAEs (occurring in five or more patients [all treatment groups combined]) were COPD (6.8%), pneumonia (1.5%), atrial fibrillation (0.5%), angina pectoris (0.3%), bronchitis (0.3%) and coronary artery disease (0.3%). These non-fatal SAEs occurred at a similar incidence in all treatment groups, except for COPD, which was slightly more common in the active treatment groups (budesonide/formoterol 320/9 μg [7.1%], budesonide/formoterol 160/9 μg [6.7%] and formoterol [7.9%]) versus placebo (5.6%). Fifteen patients died during the randomized treatment period with no imbalance across treatments: three in the budesonide/formoterol 320/9 μg group; six in the budesonide/formoterol 160/9 μg group; two in the formoterol group; and four in the placebo group. Fifteen patients died after the randomized treatment period (≥2 days after stopping study treatment) with no imbalance across treatments: five in the budesonide/formoterol 320/9 μg group; two in the budesonide/formoterol 160/9 μg group; four in the formoterol group; and four in the placebo group. None of the deaths were considered drug related by the investigator. Clinically significant changes in vital signs, 24-hour urinary cortisol, and 12-lead ECGs, QT interval and Holter assessments were rare. 
The geometric mean values of 24-hour urinary cortisol at 6 months and end of treatment were lower in both budesonide/formoterol dosage groups compared with formoterol and placebo (see table SV in the supplementary material). Significant differences in 24-hour urinary cortisol were observed between the budesonide/formoterol 320/9 μg and placebo groups (p = 0.035) at month 6 and between the budesonide/formoterol 320/9 μg and formoterol groups (p = 0.044) at end of treatment. Corrected QT (QTc) intervals ≥450 msec were experienced by similar percentages of patients across all treatment groups, whereas few patients experienced QTc intervals ≥500 msec or a QTc change ≥60 msec (see table SVI in the supplementary material). Data from ECG and Holter recordings showed a similar incidence of new onset atrial fibrillation in the active and placebo groups. BMD was stable over the 1-year study period in all treatment groups, with small but statistically significant differences in changes from baseline observed for budesonide/formoterol 320/9 μg compared with all other treatments for total lumbar spine BMD (p ≤ 0.037) and for budesonide/formoterol 320/9 μg versus formoterol for total hip BMD (p = 0.012) [see table SVII in the supplementary material]. The differences between the treatment groups in changes from baseline in BMD were of unclear clinical relevance, as the geometric least squares mean ratios for these treatment group comparisons were close to 1 (range 0.98–0.99). Minor increases in lenticular opacities and intraocular pressure were noted across all treatment groups in the ophthalmological analysis subset; the magnitude of these changes was similar among the treatment groups (see table SVIII in the supplementary material). Small but significant differences were observed for the change from baseline in posterior subcapsular score between the budesonide/formoterol groups (p = 0.022). Clinically significant changes in ophthalmological assessment were infrequent.\n\nBODY.DISCUSSION:\nIn this study, both budesonide/formoterol dosage strengths resulted in significant improvements from baseline over the randomized treatment period in pulmonary function, health status and COPD symptoms, as measured by dyspnoea, cough and sputum production, and a clinically significant reduction in COPD exacerbation rate compared with placebo. In addition, both budesonide/formoterol pMDI dosages were well tolerated relative to formoterol alone and placebo. The results of this 12-month study confirm those from the earlier related 6-month study by Tashkin et al.[4] and are consistent with those from studies of budesonide/formoterol DPI.[2,3] Taken together, the results demonstrate that this ICS/LABA combination provides benefits beyond formoterol alone in the treatment of COPD. The contribution of budesonide to improvements from baseline in pulmonary function and COPD symptoms was demonstrated by the greater efficacy of budesonide/formoterol 320/9 μg compared with formoterol alone for the co-primary variables of pre-dose FEV1 and 1-hour post-dose FEV1, and for dyspnoea and total BCSS scores, supporting use of the higher-dose combination for COPD symptoms. Both budesonide/formoterol dosage strengths demonstrated greater improvements from baseline compared with formoterol alone in morning and evening PEF, 12-hour FEV1, symptom scores and rescue medication use. 
In addition, the 25–29% reduction in exacerbation rate observed with both budesonide/formoterol dosage strengths beyond the substantial reduction achieved with formoterol alone further demonstrates the important contribution of budesonide to the combination product. The magnitude of the reduction in exacerbation rates reported in this study is similar to that reported previously for fixed-dose regimens of budesonide/formoterol 320/9 μg twice daily[2–4] and fluticasone propionate/salmeterol[11–13] relative to LABA alone. A key difference between this 12-month study and the 6-month study of similar design reported by Tashkin et al.[4] is that this study was powered to show a difference in exacerbations, while the 6-month study was not. This study demonstrated a significant reduction in exacerbation rates in patients treated with budesonide/formoterol 320/9 μg and 160/9 μg compared with those receiving formoterol or placebo. The fact that a greater percentage of patients in the formoterol and placebo groups who previously received ICS therapy, either alone or in combination, experienced exacerbations compared with those who did not previously receive ICS therapy may result in a skewing of the data; however, a similar response was observed in combination budesonide/formoterol patients with a greater percentage of patients who previously received ICS experiencing exacerbations compared with patients who were not previously treated with ICS therapy. Similarly, a greater percentage of patients who previously received ICS therapy discontinued in the formoterol and placebo groups than those who did not previously receive ICS therapy; however, this trend was reversed in the combination budesonide/formoterol groups. The long-term nature of this study allowed for a more thorough assessment of tolerability compared with the 6-month study.[4] No differences in pneumonia incidence were observed among treatment groups in this study or in the 6-month study, despite a shorter exposure time in the placebo group.[4] These results contrast with previous COPD studies that showed an increased incidence of pneumonia with fluticasone propionate/salmeterol treatment.[11–14] However, in this study and the 6-month study,[4] the incidence of potential lung infections other than pneumonia was slightly higher in all active treatments compared with placebo; these differences were largely driven by an increased incidence of bronchitis. The incidence of SAEs, including COPD SAEs, was slightly higher in both budesonide/formoterol groups compared with placebo. This finding may be attributed to the earlier discontinuation in the placebo group, which resulted in a shorter study drug exposure (35 and 29 fewer days vs budesonide/formoterol 320/9 μg and 160/9 μg, respectively). Although the overall incidence of atrial fibrillation, a known LABA class effect, was small (1.1%), an imbalance was noted between the formoterol-containing arms and placebo. However, no differences in new-onset atrial fibrillation were apparent among groups based on 24-hour Holter monitoring or ECG data collected at repeated visits during the randomized treatment period. 
Overall, the safety profile of budesonide/formoterol pMDI in this 12-month study was similar to that reported in the 6-month study by Tashkin et al.,[4] suggesting no increase in safety concerns with a longer duration of treatment.\n\nBODY.CONCLUSIONS:\nBoth dosage strengths of budesonide/formoterol pMDI (320/9 μg and 160/9 μg) were effective and well tolerated over 1 year of treatment in this population of patients with moderate to very severe COPD. In addition, budesonide/formoterol pMDI 320/9 μg demonstrated greater efficacy compared with formoterol for pre-dose FEV1, suggesting that the higher dosage strength containing twice the daily dose of budesonide is appropriate in this patient population.\n\nBODY.ELECTRONIC SUPPLEMENTARY MATERIAL:\nSupplementary material, approximately 247 KB.\n\n**Question:** Compared to Placebo what was the result of Budesonide/formoterol pMDI 160/9 μg on Mean FEV1 at 12 hours on the day of randomization?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly increased\n"
}
] |
significantly increased
|
316
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Efficacy and Tolerability of Budesonide/Formoterol in One Hydrofluoroalkane Pressurized Metered-Dose Inhaler in Patients with Chronic Obstructive Pulmonary Disease\n\n ABSTRACT:\nBackground: Combination therapy with a long-acting bronchodilator and an inhaled corticosteroid (ICS) is recommended in patients with chronic obstructive pulmonary disease (COPD) who have frequent exacerbations. The efficacy and tolerability of the combination of budesonide/formoterol have been demonstrated in patients with COPD when administered via the dry powder inhaler (DPI) in a 1-year study and when administered via the hydrofluoroalkane (HFA) pressurized metered-dose inhaler (pMDI) in a 6-month study. Objective: This study assessed the long-term efficacy and tolerability of budesonide/formoterol HFA pMDI in patients with moderate to very severe COPD. Methods: This was a 12-month, randomized, double-blind, double-dummy, parallel-group, active- and placebo-controlled, multicentre study (NCT00206167) of 1964 patients aged ≥40 years with moderate to very severe COPD conducted from 2005 to 2007 at 237 sites in the US, Europe and Mexico. After 2 weeks of treatment based on previous therapy (ICSs, short-acting bronchodilators allowed), patients received one of the following treatments twice daily: budesonide/formoterol pMDI 160/4.5 μg × two inhalations (320/9 μg); budesonide/formoterol pMDI 80/4.5 μg × two inhalations (160/9 μg); formoterol DPI 4.5 μg × two inhalations (9 μg); or placebo. Main outcome measures: The co-primary efficacy variables were pre-dose forced expiratory volume in 1 second (FEV1) and 1-hour post-dose FEV1. Results: Budesonide/formoterol 320/9 μg demonstrated greater improvements in pre-dose FEV1 versus formoterol (p = 0.008), and both budesonide/formoterol doses demonstrated greater improvements in 1-hour post-dose FEV1 versus placebo (p < 0.001). The rate of COPD exacerbations was lower in both budesonide/formoterol groups compared with formoterol and placebo (p ≤ 0.004). Both budesonide/formoterol doses were more effective than placebo (p ≤ 0.006) for controlling dyspnoea and improving health status (St George's Respiratory Questionnaire). All treatments were generally well tolerated. The incidence of pneumonia was not different for active (3.4–4.0%) and placebo (5.0%) groups. Conclusions: Budesonide/formoterol pMDI (320/9 μg and 160/9 μg) improved pulmonary function and reduced symptoms and exacerbations over 1 year in patients with moderate to very severe COPD. Only budesonide/formoterol pMDI 320/9 μg demonstrated greater efficacy for both co-primary variables compared with formoterol DPI 9 μg. 
Both budesonide/formoterol pMDI dosages were well tolerated relative to formoterol and placebo.\n\nABSTRACT.ELECTRONIC SUPPLEMENTARY MATERIAL:\nSupplementary material is available for this article at 10.2165/00003495-200969050-00004 and is accessible for authorized users.\n\nBODY.BACKGROUND:\nIn patients with chronic obstructive pulmonary disease (COPD) and frequent exacerbations, the combination of a long-acting bronchodilator and an inhaled corticosteroid (ICS) is recommended.[1] Budesonide and formoterol delivered via a single dry powder inhaler (DPI; Symbicort® Turbuhaler®; AstraZeneca, Lund, Sweden) have been shown to improve pulmonary function[2,3] and reduce COPD exacerbation rates.[2,3] Fixed-combination budesonide/formoterol is available in the US in a hydrofluoroalkane pressurized metered-dose inhaler (pMDI; Symbicort® Inhalation Aerosol; AstraZeneca LP, Wilmington, DE, USA). While most patients with COPD can benefit from either a pMDIor DPI, the pMDI formulation allows patients whomay have inadequate peak inspiratory flow for optimal delivery from a DPI to use the pMDI effectively. A recent 6-month study demonstrated the efficacy and tolerability of budesonide/formoterol pMDI in patients with moderate to very severe COPD.[4] Treatment with budesonide/formoterol pMDI resulted in significant improvements in pulmonary function, dyspnoea and health-related quality of life; however, the duration of the study was limiting with respect to the analysis of low-frequency events, such as COPD exacerbations, and precluded an assessment of safety outcomes with long-term exposure.[4] In this study, the long-term efficacy and tolerability of two dosage strengths of budesonide/formoterol pMDI were assessed over 12 months in patients with moderate to very severe COPD.\n\nBODY.PATIENTS AND METHODS.PATIENTS:\nThe inclusion criteria were designed to select a population with moderate to very severe COPD with previous exacerbations (i.e. appropriate candidates for combination ICS/long-acting β2-adrenoceptor agonist [LABA] therapy): age ≥40 years, diagnosis of symptomatic COPD for >2 years, ≥10 pack-year smoking history, prebronchodilator forced expiratory volume in 1 second (FEV1) of ≤50% of predicted normal and prebronchodilator FEV1/forced vital capacity (FVC) of <70%. Patients were to have a Modified Medical Research Council dyspnoea scale score of ≥2 and a history of at least one COPD exacerbation requiring oral corticosteroids or antibacterials within 1–12 months before the first study visit. Additional enrolment criteria were the same as those in a similar 6-month study by Tashkin et al.[4]\n\nBODY.PATIENTS AND METHODS.STUDY DESIGN:\nThis was a randomized, double-blind, double-dummy, parallel-group, active- and placebo-controlled, multicentre trial (ClinicalTrials.gov identifier: NCT00206167) conducted from 2005 to 2007 at 237 sites in the US, Europe and Mexico. The study protocol was approved by a local institutional review board and ethics committee, and written informed consent was obtained from patients. The study was designed to conform with the Declaration of Helsinki, and was consistent with the International Conference on Harmonisation and Good Clinical Practice and applicable regulatory requirements. After meeting eligibility criteria, patients entered a 2-week run-in period, during which they received ICS monotherapy if previously stable on ICS (alone or in combination) and ipratropium bromide at a fixed dose if previously receiving anticholinergics. 
Albuterol (salbutamol) was permitted for rescue use throughout the study. At randomization, all previous ICSs were discontinued and patients were randomized to one of four treatments (figure 1). Patients who were treated with ipratropium bromide during the run-in period were allowed to continue on it throughout the study period. Other allowed and disallowed concomitant medications were the same as those reported in the study by Tashkin et al.[4]\nFig. 1Study design. To maintain blinding, patients received both a pressurized metered-dose inhaler (pMDI) and a dry powder inhaler (DPI) containing either active treatment or double-dummy placebo (PL) as appropriate. This study used formoterol (FM) DPI (Oxis® Turbuhaler®, AstraZeneca, Lund, Sweden) as the FM comparator because FM is not available as a hydrofluoroalkane pMDI in the US. A previous study in asthma patients reported equivalent FM-related bronchodilatory effects when FM was administered in combination with budesonide (BUD) via pMDI or alone via DPI.[5] Patients were asked to return to the clinic for follow-up visits 3–8 at the end of months 1, 2, 4, 6, 9 and 12, and received a telephone call 4 weeks after the last clinic visit. bid = twice daily; R = randomization.\n\nBODY.PATIENTS AND METHODS.EFFICACY EVALUATIONS:\nThe co-primary efficacy variables, measured at all clinic visits, were pre-dose FEV1 and 1-hour post-dose FEV1. Additional pulmonary function variables included pre-dose and 1-hour post-dose FVC measured at all clinic visits, and morning and evening peak expiratory flow (PEF) recorded daily in patient diaries. In a subset of patients (n = 491 of 1964; 25%) who agreed to undergo spirometry testing, 12-hour serial spirometry was performed and pre-dose and 1-hour post-dose inspiratory capacity (IC) were collected at randomization and months 6 and 12. FEV1 was measured pre-dose and at 5, 15, 30, 60, 120, 180, 240, 360, 480, 600 and 720 minutes post-dose. Baseline-adjusted average 12-hour FEV1 was calculated as the area between the 12-hour post-dose FEV1-over-time curve and the baseline pre-dose FEV1, divided by observation time. Spirometry was performed according to American Thoracic Society guidelines.[6] Crapo-predicted normals for FEV1 were used.[7] Secondary efficacy endpoints included COPD exacerbations, dyspnoea (separate and composite scores) and health status, all assessed as previously described.[4] A COPD exacerbation was defined as worsening of COPD requiring an oral corticosteroid or hospitalization.[4] Dyspnoea was assessed daily before the evening dose of study medication using the Breathlessness Diary, a validated single-item component of the Breathlessness Cough and Sputum Scale (BCSS).[8] Health status was assessed at months 1, 2, 6 and 12 using the St George's Respiratory Questionnaire (SGRQ).[9,10] Sleep score, percentage of awakening-free nights (sleep score of 0) and study rescue medication use were assessed as previously described by Tashkin et al.[4]\n\nBODY.PATIENTS AND METHODS.SAFETY EVALUATIONS:\nSafety was assessed by adverse event (AE) reporting. Pneumonia events were reported by physicians based on the Medical Dictionary for Regulatory Activities (version 10.0) pneumonia-related preferred terms (pneumonia, bronchopneumonia, lobar pneumonia or pneumonia staphylococcal). 
Vital signs and 12-lead ECGs were evaluated as previously described by Tashkin et al.[4] Subsets of patients were assessed for 24-hour urinary cortisol (n = 179), 24-hour Holter monitoring (n = 520), bone mineral density (BMD) at the hip and spine regions (n = 326), and ophthalmological assessments (n = 461), including intraocular pressure and progression of lenticular opacities. In all patients, blood and urine samples were collected, and comprehensive physical examinations were carried out at the time of screening and at the end of month 12. Blood and urine samples were analysed by a central laboratory (Quest Diagnostics Clinical Laboratories, Van Nuys, CA, USA). Vital signs, including heart rate and blood pressure, were measured at all study visits. Samples to assess 24-hour urinary cortisol levels were collected in a subset of patients at or before randomization and within 1 week before the 6- and 12-month visits, and analysed by Quest Diagnostics. 24-Hour Holter monitoring was performed at screening and months 1 and 4, BMD assessments (two sets of dual energy x-ray absorptiometry scans in the hip and spine regions) were performed at screening and the end of month 12, and ophthalmology assessments (intraocular pressure and lenticular opacities) were performed at screening and the end of months 6 and 12.\n\nBODY.PATIENTS AND METHODS.STATISTICAL ANALYSES:\nThe efficacy analysis set (i.e. intent-to-treat population) included all randomized patients who received at least one dose of randomized study medication and contributed sufficient data for at least one co-primary or secondary efficacy endpoint to be calculated during the randomized treatment period. The safety analysis population included all randomized patients who received at least one dose of randomized study medication and from whom any post-randomization data were available. For the subsets of patients who underwent serial spirometry, 24-hour urinary cortisol, BMD, 24-hour Holter monitoring and ophthalmological analyses, analysis sets were defined as those patients who received at least one dose of randomized study medication and had baseline and on-treatment values for the variable being assessed. Patients who discontinued prematurely completed final visit (month 12) assessments at the time of discontinuation, followed by the 4-week follow-up. A sample size of approximately 400 patients per treatment group was calculated to allow 90% power to detect a reduction in the number of COPD exacerbations of approximately 30%, adjusting for overdispersion of 2.3. This sample size ensured >95% power to detect a difference of 0.10L in FEV1, given an estimated standard deviation of 0.3 L. All tests were two-sided, with p ≤ 0.05 considered significant. The prespecified primary comparators were formoterol DPI for pre-dose FEV1 to demonstrate the contribution of budesonide and placebo for 1-hour post-dose FEV1. To address issues of multiplicity relating to multiple-dose comparisons and multiple variables of interest, a fixed-sequence testing procedure was adopted. The primary comparisons were prespecified as budesonide/formoterol pMDI 320/9 μg compared with (i) placebo for pre-dose FEV1 and 1-hour post-dose FEV1; and (ii) formoterol DPI 9 μg for pre-dose FEV1. If significant differences were obtained, comparisons continued with budesonide/formoterol pMDI 320/9 μg compared with placebo for (i) dyspnoea; (ii) SGRQ total score; and (iii) number of exacerbations. 
If significant differences were obtained for each of these key secondary variables, the lower dose of budesonide/formoterol pMDI was tested on the co-primary variables, as previously described, and, if significant differences were obtained, testing continued with number of exacerbations, dyspnoea and SGRQ total score compared with placebo. For all secondary efficacy variables, the primary comparison was budesonide/formoterol pMDI versus placebo. Primary and secondary variables were assessed as the change from baseline to the average over the randomized treatment period except for average 12-hour FEV1, mean FEV1 at 12 hours and SGRQ at end of treatment. Changes from baseline in the co-primary efficacy variables were analysed via analysis of co-variance (ANCOVA), adjusting for treatment, country and baseline value. The following three key secondary efficacy variables were identified: (i) dyspnoea; (ii) SGRQ total score; and (iii) exacerbations. Secondary variables were analysed similar to the co-primary efficacy variables; however, exacerbation rate (number/patient-treatment year) was analysed using a Poisson regression model, adjusted for country and differential randomization time, and time to first COPD exacerbation was described via Kaplan-Meier plot and analysed using the log-rank test. The number and percentage of patients experiencing clinically meaningful changes (based on a prespecified minimal important difference) in dyspnoea were analysed using the Cochran-Mantel-Haenszel test, adjusting for country. Geometric mean 24-hour urinary cortisol levels at end of treatment were compared between treatment groups using a multiplicative ANCOVA model. Mean changes from baseline to the average during the randomized treatment period in 12-lead ECG, and Holter variables and ophthalmology variables were analysed using a model similar to that used for the co-primary variables. BMD variables (natural logarithm of the analysis timepoint minus the natural logarithm of the respective baseline value for hip and spine) were analysed using an ANCOVA model adjusting for country, treatment and natural logarithm of the baseline value. Other safety data were summarized using descriptive statistics. No formal hypothesis testing of the safety data was performed.\n\nBODY.RESULTS.PATIENTS:\nOf 1964 randomized patients, 1355 completed the study (figure 2). Discontinuation was greater with placebo versus budesonide/formoterol and formoterol driven by withdrawal of consent. Time to discontinuation was significantly (p ≤ 0.004) prolonged in both budesonide/formoterol groups versus placebo. AE was the most common reason for study discontinuation, with a similar incidence across treatments (12.1–13.5%). Approximately 60% of patients had documented use of an ICS, either alone or in combination before entering the study. The percentage of patients who discontinued from the study in the formoterol and placebo groups was slightly higher in patients previously treated with ICS (34.9% and 38.0%, respectively) compared with those not previously receiving ICS before the study (26.8% and 33.9%, respectively). In contrast, patients in the budesonide/formoterol 320/9-μg group previously receiving ICS had a lower discontinuation rate compared with those not previously receiving ICS (23.1% vs 34.1%, respectively). In the budesonide/formoterol 160/9-μg group, the percentage of patients who discontinued the study was similar for those who were previously receiving an ICS (29.5%) and those who were not (28.2%).\nFig. 
2Patient disposition. AE = adverse event; bid = twice daily; BUD = budesonide; DPI = dry powder inhaler; FM = formoterol; med = medication; PL = placebo; pMDI = pressurized metered-dose inhaler. Most demographic and disease characteristics at baseline were similar across treatments (table I). However, in the serial spirometry subset (n = 491), mean percentage reversibility was greater in the budesonide/formoterol 320/9 μg (19.9%), budesonide/formoterol 160/9 μg (20.7%) and placebo (19.5%) groups versus formoterol (16.9%). COPD severity[1] was moderate in 17.8%, severe in 60.4% and very severe in 21.5% of patients. Common co-morbid conditions included hypertension (41.6%), lipid profile abnormalities (22.0%), cardiac disease (17.7%), diabetes mellitus (11.0%), osteoporosis (10.5%), cataracts (5.2%), atrial fibrillation/arrhythmia (4.3%) and congestive cardiac failure (2.9%).\nTable IPatient demographic and baseline clinical characteristics of randomized patients\n\nBODY.RESULTS.EFFICACY EVALUATIONS.PULMONARY FUNCTION.CO-PRIMARY ASSESSMENTS:\nImprovements in pre-dose FEV1 were significantly greater for budesonide/formoterol 320/9 μg compared with formoterol (primary comparison; p = 0.008) and for both budesonide/formoterol dosages compared with placebo (p < 0.001) [figure 3a]. Although improvements in 1-hour post-dose FEV1 were significantly greater for both budesonide/formoterol dosages compared with placebo (primary comparison; p < 0.001), budesonide/formoterol 320/9 μg also resulted in significantly greater improvements compared with formoterol (p = 0.023) [figure 3b]. Improvements from baseline were apparent at the first assessment (pre-dose FEV1 at end of month 1; 1-hour post-dose FEV1 at day of randomization) and overall maintained over the 12-month treatment period for both budesonide/formoterol dosages (figure 3a and b).\nFig. 3Co-primary efficacy endpoints. Least squares mean change from baseline by study visit over the randomized treatment period in (a) pre-dose forced expiratory volume in 1 second (FEV1) and (b) 1-hour post-dose FEV1. BUD = budesonide; DPI = dry powder inhaler; FM = formoterol; pMDI = pressurized metered-dose inhaler. * p < 0.001 vs placebo; † p ≤ 0.023 vs FM.\n\nBODY.RESULTS.EFFICACY EVALUATIONS.PULMONARY FUNCTION.SECONDARY ASSESSMENTS:\nMorning and evening PEF improved significantly for all active treatments compared with placebo (p ≤ 0.012) and for both budesonide/formoterol dosages compared with formoterol (p ≤ 0.017) [table II]. Results for pre-dose and 1-hour post-dose FVC are presented in the online supplement (see figures S1a and b in the supplementary material ['ArticlePlus'] at http://links.adisonline.com/DGZ/A5).\nTable IIMean (SD) changes from baseline in additional pulmonary function assessments In the serial spirometry subset, a ≥15% improvement in FEV1 was observed at 5 minutes after dose administration (first assessment) with all active treatments on the day of randomization (budesonide/formoterol 320/9 μg, 17.7%; budesonide/formoterol 160/9 μg, 20.3%; formoterol 9 μg, 16.5%) and at end of treatment (budesonide/formoterol 320/9 μg, 21.8%; budesonide/formoterol 160/9 μg, 22.4%; formoterol 9 μg, 15.0%) [figures 4a, b and c]. There was no evidence of a diminished effect at end of treatment over the 12-hour period in the budesonide/formoterol groups. 
Mean FEV1 at 12 hours and baseline-adjusted average 12-hour FEV1 were significantly improved with both budesonide/formoterol dosages compared with placebo on the day of randomization and at end of treatment (p ≤ 0.002) [table II]. Although both budesonide/formoterol dosages demonstrated significant improvements in mean FEV1 at 12 hours and in baseline-adjusted average 12-hour FEV1 compared with formoterol on the day of randomization (p ≤ 0.029), only budesonide/formoterol 320/9 μg demonstrated this effect at the end of treatment (p ≤ 0.004) [table II].\nFig. 4Mean percentage change from baseline in forced expiratory volume in 1 second (FEV1) over 12 hours at randomization and end of treatment (EOT) for (a) budesonide (BUD)/formoterol (FM) 320/9 μg twice daily (bid) vs placebo, (b) BUD/FM 160/9 μg bid vs placebo and (c) FM 9 μg bid vs placebo. DOR = day of randomization; DPI = dry powder inhaler; pMDI = pressurized metered-dose inhaler. No significant differences in pre-dose IC were observed among the treatment groups (figure 5a). Improvements from baseline to the average over the randomized treatment period in 1-hour post-dose IC were significantly greater with both budesonide/formoterol dosages compared with placebo (p < 0.001) and formoterol (p ≤ 0.018) [figure 5b]. Both dosages of budesonide/formoterol resulted in a 350 mL (≈25%) improvement from baseline to the average over the randomized treatment period in 1-hour post-dose IC, and formoterol resulted in a 210 mL (17%) improvement in 1-hour post-dose IC.\nFig. 5Least squares mean change from baseline by study visit over the randomized treatment period in (a) pre-dose inspiratory capacity (IC) and (b) 1-hour post-dose IC. BUD = budesonide; DPI = dry powder inhaler; FM = formoterol; pMDI = pressurized metered-dose inhaler. * p < 0.001 vs placebo; † p = 0.01 vs FM; ‡ p < 0.05 vs FM; § p < 0.01 vs placebo.\n\nBODY.RESULTS.EFFICACY EVALUATIONS.CHRONIC OBSTRUCTIVE PULMONARY DISEASE (COPD) EXACERBATIONS:\nTime to first COPD exacerbation was significantly prolonged with both budesonide/formoterol dosages compared with placebo (p ≤ 0.004) and with budesonide/formoterol 320/9 μg compared with formoterol (p = 0.026) [figure 6]. In addition, significant reductions in the overall number of exacerbations per patient-treatment year were observed with budesonide/formoterol 320/9 μg and 160/9 μg versus placebo (37% and 41%, respectively; p < 0.001) and formoterol (25% and 29%, respectively; p ≤ 0.004) [see figure S2 in the supplementary material]. These reductions were driven by exacerbations treated with oral corticosteroids, the rate of which was reduced with budesonide/formoterol 320/9 μg and 160/9 μg versus placebo (37.5% and 42.9%, respectively) and formoterol (24.1% and 30.6%, respectively) [p ≤ 0.006]. In addition, the percentage of patients in the budesonide/formoterol 320/9 μg, budesonide/formoterol 160/9 μg, formoterol and placebo groups who experienced an exacerbation was greater in those receiving ICS therapy either alone or in combination before the study (33.7%, 38.2%, 39.9% and 40.1%, respectively) compared with those not previously receiving ICS (25.8%, 24.9%, 29.4% and 32.8%, respectively).\nFig. 6Kaplan-Meier probability curve for the time to first chronic obstructive pulmonary disease exacerbation during randomized treatment. BUD = budesonide; DPI = dry powder inhaler; FM = formoterol; pMDI = pressurized metered-dose inhaler. 
* p ≤ 0.004 vs placebo; † p = 0.026 vs FM.\n\nBODY.RESULTS.EFFICACY EVALUATIONS.HEALTH STATUS AND COPD SYMPTOMS:\nImprovements in SGRQ total score were significantly greater for both budesonide/formoterol dosages compared with placebo (p ≤ 0.006) and for budesonide/formoterol 160/9 μg compared with formoterol (p = 0.006; table III). The improvements from baseline in SGRQ total score in the active treatment groups were not clinically meaningful versus placebo (i.e. decrease of ≥4 points). Significantly greater improvements in COPD symptom variables were observed for both budesonide/formoterol dosages versus placebo for all variables (BCSS, dyspnoea score, cough score, sputum score, sleep score, percentage of awakening-free nights and total daily rescue medication use) [p ≤ 0.003]. Compared with formoterol, mean improvements were significantly greater with budesonide/formoterol 320/9 μg (p ≤ 0.038) for all COPD symptom variables except sputum score and percentage of awakening-free nights, and with budesonide/formoterol 160/9 μg (p ≤ 0.047) for all COPD symptom variables except BCSS, dyspnoea and sputum score (table IV).\nTable IIIMean changes (SD) in SGRQ total and domain scores from baselinea to end of treatmentbTable IVMean (SD) changes in chronic obstructive pulmonary disease (COPD) symptom variables from baselinea to the average over the randomized treatment period Improvements (i.e. reductions) in dyspnoea scores were significantly greater for all active treatments versus placebo (p ≤ 0.003) and for budesonide/formoterol 320/9 μg versus formoterol (p = 0.032; table IV). All active treatment arms demonstrated ≥0.2 points (minimal important difference) change from baseline, but only budesonide/formoterol 320/9 μg had an increase of ≥0.2 points over placebo. A significantly (p < 0.001) greater percentage of patients in both budesonide/formoterol groups and in the formoterol group experienced clinically meaningful improvements in dyspnoea versus placebo (see figure S3 in the supplementary material).\n\nBODY.RESULTS.SAFETY EVALUATIONS:\nMean treatment exposure was lowest for placebo (270 days) compared with budesonide/formoterol 320/9 μg (305 days), budesonide/formoterol 160/9 μg (299 days) and formoterol (289 days). The most commonly reported AE (irrespective of causality) was COPD, which had a lower incidence with budesonide/formoterol 320/9 μg versus placebo and a slightly higher incidence with budesonide/formoterol 160/9 μg versus placebo (table V). AEs considered by the investigator to be related to study medication were generally similar among treatment groups with the most commonly reported being oral candidiasis, COPD and dysphonia (see table SI in the supplementary material).\nTable VOverall adverse events (AEs) [irrespective of relationship to study medication] reported by ≥3% of patients The total incidence of pneumonia-related AEs (pneumonia, bronchopneumonia, lobar pneumonia and pneumonia staphylococcal) was similar for budesonide/formoterol 320/9 μg and 160/9 μg (4.0% and 3.4%, respectively) compared with formoterol (3.4%) and placebo (5.0%) [see table SII in the supplementary material]. For potential lung infections other than pneumonia, the incidence was slightly higher in the active treatment groups versus placebo, which was driven largely by bronchitis (see table SII in the supplementary material). 
AEs typically or potentially associated with local and systemic effects of inhaled corticosteroids (local effects: aphonia, dysphonia, oral candidiasis and candidiasis; systemic effects: weight gain, adrenal suppression, ocular effects, skin effects, psychiatric disorder, diabetes control, thirst, taste effects and bone effects) were more frequent with budesonide/formoterol 320/9 μg (10.3% and 4.0%, respectively) and budesonide/formoterol 160/9 μg (5.7% and 4.5%, respectively) than formoterol (0.6% and 2.6%, respectively) and placebo (2.5% and 2.7%, respectively). The overall incidence of LABA class effects (i.e. tremor, palpitation, tachycardia, potassium changes, glucose changes, headache, agitation, anxiety, sleep effects and muscle cramp) was low, but higher among active treatment groups (budesonide/formoterol 320/9 μg [9.5%], budesonide/formoterol 160/9 μg [8.9%] and formoterol [6.5%]) versus placebo (4.8%). The incidence of cardiac-related AEs was higher for all active treatments (10.5–11.3%) versus placebo (6.9%). The most common cardiac-related AEs were hypertension and angina pectoris (budesonide/formoterol 320/9 μg [2.4% and 1.2%, respectively], budesonide/formoterol 160/9 μg [1.6% and 0.8%, respectively], formoterol [2.6% and 1.8%, respectively] and placebo [2.1% and 1.0%, respectively]). Although the number of atrial fibrillation AEs was low, all cases occurred within the active treatment groups. Discontinuations due to AEs (DAEs) occurred in 238 patients, with a similar incidence observed across treatment groups (range 11.3–12.5%) [see table SIII in the supplementary material]. The most common DAE was COPD, which was highest in the formoterol group (7.3%) and lowest in the budesonide/formoterol 320/9 μg group (4.0%); the incidence was similar in the budesonide/formoterol 160/9 μg (6.1%) and placebo (6.0%) groups (see table SIII in the supplementary material). The overall incidence of non-fatal serious adverse events (SAEs) was highest with formoterol (17.8%) compared with the other treatment groups and higher with budesonide/formoterol 320/9 μg (15.6%) and budesonide/formoterol 160/9 μg (13.6%) versus placebo (12.1%) [see table SIV in the supplementary material]. The most common non-fatal SAEs (occurring in five or more patients [all treatment groups combined]) were COPD (6.8%), pneumonia (1.5%), atrial fibrillation (0.5%), angina pectoris (0.3%), bronchitis (0.3%) and coronary artery disease (0.3%). These non-fatal SAEs occurred at a similar incidence in all treatment groups, except for COPD, which was slightly more common in the active treatment groups (budesonide/formoterol 320/9 μg [7.1%], budesonide/formoterol 160/9 μg [6.7%] and formoterol [7.9%]) versus placebo (5.6%). Fifteen patients died during the randomized treatment period with no imbalance across treatments: three in the budesonide/formoterol 320/9 μg group; six in the budesonide/formoterol 160/9 μg group; two in the formoterol group; and four in the placebo group. Fifteen patients died after the randomized treatment period (≥2 days after stopping study treatment) with no imbalance across treatments: five in the budesonide/formoterol 320/9 μg group; two in the budesonide/formoterol 160/9 μg group; four in the formoterol group; and four in the placebo group. None of the deaths were considered drug related by the investigator. Clinically significant changes in vital signs, 24-hour urinary cortisol, and 12-lead ECGs, QT interval and Holter assessments were rare. 
The geometric mean values of 24-hour urinary cortisol at 6 months and end of treatment were lower in both budesonide/formoterol dosage groups compared with formoterol and placebo (see table SV in the supplementary material). Significant differences in 24-hour urinary cortisol were observed between the budesonide/formoterol 320/9 μg and placebo groups (p = 0.035) at month 6 and between the budesonide/formoterol 320/9 μg and formoterol groups (p = 0.044) at end of treatment. Corrected QT (QTc) intervals ≥450 msec were experienced by similar percentages of patients across all treatment groups, whereas few patients experienced QTc intervals ≥500 msec or a QTc change ≥60 msec (see table SVI in the supplementary material). Data from ECG and Holter recordings showed a similar incidence of new onset atrial fibrillation in the active and placebo groups. BMD was stable over the 1-year study period in all treatment groups, with small but statistically significant differences in changes from baseline observed for budesonide/formoterol 320/9 μg compared with all other treatments for total lumbar spine BMD (p ≤ 0.037) and for budesonide/formoterol 320/9 μg versus formoterol for total hip BMD (p = 0.012) [see table SVII in the supplementary material]. The differences between the treatment groups in changes from baseline in BMD were of unclear clinical relevance, as the geometric least squares mean ratios for these treatment group comparisons were close to 1 (range 0.98–0.99). Minor increases in lenticular opacities and intraocular pressure were noted across all treatment groups in the ophthalmological analysis subset; the magnitude of these changes was similar among the treatment groups (see table SVIII in the supplementary material). Small but significant differences were observed for the change from baseline in posterior subcapsular score between the budesonide/formoterol groups (p = 0.022). Clinically significant changes in ophthalmological assessment were infrequent.\n\nBODY.DISCUSSION:\nIn this study, both budesonide/formoterol dosage strengths resulted in significant improvements from baseline over the randomized treatment period in pulmonary function, health status and COPD symptoms, as measured by dyspnoea, cough and sputum production, and a clinically significant reduction in COPD exacerbation rate compared with placebo. In addition, both budesonide/formoterol pMDI dosages were well tolerated relative to formoterol alone and placebo. The results of this 12-month study confirm those from the earlier related 6-month study by Tashkin et al.[4] and are consistent with those from studies of budesonide/formoterol DPI.[2,3] Taken together, the results demonstrate that this ICS/LABA combination provides benefits beyond formoterol alone in the treatment of COPD. The contribution of budesonide to improvements from baseline in pulmonary function and COPD symptoms was demonstrated by the greater efficacy of budesonide/formoterol 320/9 μg compared with formoterol alone for the co-primary variables of pre-dose FEV1 and 1-hour post-dose FEV1, and for dyspnoea and total BCSS scores, supporting use of the higher-dose combination for COPD symptoms. Both budesonide/formoterol dosage strengths demonstrated greater improvements from baseline compared with formoterol alone in morning and evening PEF, 12-hour FEV1, symptom scores and rescue medication use. 
In addition, the 25–29% reduction in exacerbation rate observed with both budesonide/formoterol dosage strengths beyond the substantial reduction achieved with formoterol alone further demonstrates the important contribution of budesonide to the combination product. The magnitude of the reduction in exacerbation rates reported in this study is similar to that reported previously for fixed-dose regimens of budesonide/formoterol 320/9 μg twice daily[2–4] and fluticasone propionate/salmeterol[11–13] relative to LABA alone. A key difference between this 12-month study and the 6-month study of similar design reported by Tashkin et al.[4] is that this study was powered to show a difference in exacerbations, while the 6-month study was not. This study demonstrated a significant reduction in exacerbation rates in patients treated with budesonide/formoterol 320/9 μg and 160/9 μg compared with those receiving formoterol or placebo. The fact that a greater percentage of patients in the formoterol and placebo groups who previously received ICS therapy, either alone or in combination, experienced exacerbations compared with those who did not previously receive ICS therapy may result in a skewing of the data; however, a similar response was observed in combination budesonide/formoterol patients with a greater percentage of patients who previously received ICS experiencing exacerbations compared with patients who were not previously treated with ICS therapy. Similarly, a greater percentage of patients who previously received ICS therapy discontinued in the formoterol and placebo groups than those who did not previously receive ICS therapy; however, this trend was reversed in the combination budesonide/formoterol groups. The long-term nature of this study allowed for a more thorough assessment of tolerability compared with the 6-month study.[4] No differences in pneumonia incidence were observed among treatment groups in this study or in the 6-month study, despite a shorter exposure time in the placebo group.[4] These results contrast with previous COPD studies that showed an increased incidence of pneumonia with fluticasone propionate/salmeterol treatment.[11–14] However, in this study and the 6-month study,[4] the incidence of potential lung infections other than pneumonia was slightly higher in all active treatments compared with placebo; these differences were largely driven by an increased incidence of bronchitis. The incidence of SAEs, including COPD SAEs, was slightly higher in both budesonide/formoterol groups compared with placebo. This finding may be attributed to the earlier discontinuation in the placebo group, which resulted in a shorter study drug exposure (35 and 29 fewer days vs budesonide/formoterol 320/9 μg and 160/9 μg, respectively). Although the overall incidence of atrial fibrillation, a known LABA class effect, was small (1.1%), an imbalance was noted between the formoterol-containing arms and placebo. However, no differences in new-onset atrial fibrillation were apparent among groups based on 24-hour Holter monitoring or ECG data collected at repeated visits during the randomized treatment period. 
Overall, the safety profile of budesonide/formoterol pMDI in this 12-month study was similar to that reported in the 6-month study by Tashkin et al.,[4] suggesting no increase in safety concerns with a longer duration of treatment.\n\nBODY.CONCLUSIONS:\nBoth dosage strengths of budesonide/formoterol pMDI (320/9 μg and 160/9 μg) were effective and well tolerated over 1 year of treatment in this population of patients with moderate to very severe COPD. In addition, budesonide/formoterol pMDI 320/9 μg demonstrated greater efficacy compared with formoterol for pre-dose FEV1, suggesting that the higher dosage strength containing twice the daily dose of budesonide is appropriate in this patient population.\n\nBODY.ELECTRONIC SUPPLEMENTARY MATERIAL:\nSupplementary material, approximately 247 KB.\n\n**Question:** Compared to Placebo what was the result of Budesonide/formoterol pMDI 160/9 μg on 1-hour post-dose FEV1 improvements?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly increased\n"
}
] |
significantly increased |
217 |
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: A heart failure self-management program for patients of all literacy levels: A randomized, controlled trial [ISRCTN11535170]\n\n ABSTRACT.BACKGROUND:\nSelf-management programs for patients with heart failure can reduce hospitalizations and mortality. However, no programs have analyzed their usefulness for patients with low literacy. We compared the efficacy of a heart failure self-management program designed for patients with low literacy versus usual care.\n\nABSTRACT.METHODS:\nWe performed a 12-month randomized controlled trial. From November 2001 to April 2003, we enrolled participants aged 30–80, who had heart failure and took furosemide. Intervention patients received education on self-care emphasizing daily weight measurement, diuretic dose self-adjustment, and symptom recognition and response. Picture-based educational materials, a digital scale, and scheduled telephone follow-up were provided to reinforce adherence. Control patients received a generic heart failure brochure and usual care. Primary outcomes were combined hospitalization or death, and heart failure-related quality of life.\n\nABSTRACT.RESULTS:\n123 patients (64 control, 59 intervention) participated; 41% had inadequate literacy. Patients in the intervention group had a lower rate of hospitalization or death (crude incidence rate ratio (IRR) = 0.69; CI 0.4, 1.2; adjusted IRR = 0.53; CI 0.32, 0.89). This difference was larger for patients with low literacy (IRR = 0.39; CI 0.16, 0.91) than for higher literacy (IRR = 0.56; CI 0.3, 1.04), but the interaction was not statistically significant. At 12 months, more patients in the intervention group reported monitoring weights daily (79% vs. 29%, p < 0.0001). After adjusting for baseline demographic and treatment differences, we found no difference in heart failure-related quality of life at 12 months (difference = -2; CI -5, +9).\n\nABSTRACT.CONCLUSION:\nA primary care-based heart failure self-management program designed for patients with low literacy reduces the risk of hospitalizations or death.\n\nBODY.BACKGROUND:\nLimited literacy skills are common among adults in the United States [1]. Low literacy is associated with increased risk of hospitalization and worse control of chronic diseases [1-4]. Heart failure is a common chronic illness requiring multiple medications and significant self-care. Heart failure is the leading cause of hospitalization in the Medicare population [5]. The complexity of care for heart failure puts people with low literacy at considerable risk for adverse outcomes including hospitalization, worse quality of life, and mortality. Heart failure disease-management interventions appear effective in reducing rehospitalizations and improving quality of life [6]. 
Most randomized clinical trials of heart failure disease management completed over the last 10 years have enrolled patients during, or shortly after, hospitalization and reported the outcome of readmission [6]. Although the designs of these programs vary, several have tested education and support to enhance patient self-management as the main component of the intervention [7-10]. The content of self-management education usually includes teaching to understand medications, reduce salt intake, monitor daily weights, and recognize symptoms. Most programs include structured follow-up either by home visit, phone, or mail. Only a few, uncontrolled studies specifically ask patients to self-adjust their diuretics [11,12]. Heart failure self-management programs may be particularly effective for vulnerable populations, such as those with poor literacy [13,14]. However, to our knowledge, no previous studies have explicitly examined the role of self-management programs in a low literacy population. A recently published study and accompanying editorial suggested that such self-management support may be most effective among vulnerable populations [13,14]. Low literacy may represent a vulnerability for which we should design our programs. Disease management for patients with low literacy may require refined approaches to foster self-management skills. We developed a heart failure self-management program for use by patients with a variety of literacy levels [15]. We performed a randomized controlled trial comparing our self-management program to usual care among outpatients to test if the program could reduce hospitalizations and improve heart failure-related quality of life.\n\nBODY.METHODS.STUDY DESIGN:\nWe conducted a randomized controlled trial in the University of North Carolina (UNC) General Internal Medicine Practice, which serves a wide socioeconomic range of patients. The practice, staffed by over 20 attending faculty and 70 medical residents, cares for over 500 patients with heart failure.\n\nBODY.METHODS.STUDY PARTICIPANTS:\nTo be eligible, patients had to have a clinical diagnosis of heart failure confirmed by their primary provider through a direct interview, and one of the following: 1) chest x-ray findings consistent with heart failure, 2) ejection fraction <40% by any method, or 3) a history of peripheral edema. They also had to have New York Heart Association class II-IV symptoms within the last 3 months. Patients were excluded if they had moderate to severe dementia (based on the treating physician's clinical judgment), terminal illness with life expectancy less than 6 months, severe hearing impairment, blindness, current substance abuse, a serum creatinine >4 mg/dl or on dialysis, a requirement of supplemental oxygen at home, lacked a telephone, or were scheduled to undergo cardiac surgery or awaiting heart transplant. We did not exclude patients on the basis of literacy skill because we felt patients of all literacy levels would benefit from this intervention. Patients who read well often prefer information presented in an easy-to-read format [16]. We accepted referrals from the cardiology clinic if patients met eligibility criteria. This study was approved by the Institutional Review Board of the UNC School of Medicine, and all participants gave informed consent prior to enrollment. For participants who could not adequately read the informed consent document, the research assistant read and explained it to them. 
They were asked to sign a short form indicating that the informed consent document was reviewed and they agreed to participate. When the short form was used, a witness was asked to attest to the adequacy of the consent process.\n\nBODY.METHODS.PROCEDURES:\nParticipants were recruited between November 2001 and April 2003 from the General Internal Medicine and Cardiology Practices at UNC Hospitals. A trained research assistant screened all patients age 30–80 for use of furosemide. If the patient was on furosemide, their physician was queried about the presence of heart failure. If the patient met eligibility criteria and consented to participate, baseline data were collected. We then randomized patients by concealed allocation based on a random number generator to receive the intervention or usual care. All patients were followed for one year. All data were collected in the General Internal Medicine Practice.\n\nBODY.METHODS.INTERVENTION:\nThe intervention was delivered in the General Internal Medicine Practice. The educational materials and disease management intervention were previously described in detail, and the intervention is summarized here [15]. The intervention began with a 1-hour educational session with a clinical pharmacist or health educator during a regular clinic visit. Patients were given an educational booklet designed for low literacy patients (written below the 6th grade level and extensively pre-tested in focus groups and a pilot study [15]) and a digital scale. The educator and patient reviewed the booklet together, including management scenarios. As part of the educational session, patients were taught to identify signs of heart failure exacerbation, perform daily weight assessment, and adjust their diuretic dose. Because this intervention was aimed at patients with low literacy, the health educator used pedagogic strategies felt to improve comprehension for patients with low literacy [17]. For example, the educator had the patient teach back the information [18], engaged the patient in filling out the notebook, and used brainstorming to help the patient incorporate self-management into their lives. The educator, patient, and primary care physician collaborated to establish the patient's \"good weight\" (i.e., where the patient's heart failure was stable) and baseline diuretic dose. The educator then filled in the management plan in the patient's notebook to help the patient better manage weight fluctuations and self-adjust the diuretic dose based on weight (Figure 1). The general plan involved doubling the dosage if weight went up and halving it if weight went down. The program coordinator then made scheduled follow-up phone calls (days 3, 7, 14, 21, 28, 56) and monthly during months 3–6. The follow-up phone calls, each lasting 5–15 minutes, were designed to reinforce the educational session and provide motivation for the patients. Again, the program coordinator had the patient describe their self-management practices and offered feedback to improve them. Patients experiencing worsening symptoms were scheduled acute visits with their physician. We did not provide specialized nursing assessment, care or medication advice beyond diuretic dosing. If the patient's doctor determined that the good weight had changed, the program coordinator would revise the care plan with the patient. 
Patients enrolled in the control group received a general heart failure education pamphlet written at approximately the 7th grade level, and continued with usual care from their primary physician. The only contacts between the research team and the control patients were at enrollment and data collection.\n\nBODY.METHODS.MEASURES:\nWe assessed outcomes at 6 and 12 months through in-person interviews and review of the medical record. To be sensitive to low literacy, all interviews were conducted verbally by a trained research assistant. If patients were unable to come to clinic for the interview, it was conducted by phone. The research assistant was not blinded to the patient's study group. Primary outcomes were death or all-cause readmission and heart failure-related quality of life at the end of 12 months. Data on hospitalization dates were obtained from the patient and confirmed by medical chart review. All admissions, regardless of hospital or cause, were counted. For exploratory analyses, we classified reason for admission as cardiac or non-cardiac. Cardiac admissions included those primarily for heart failure (e.g., shortness of breath and edema relieved by diuresis) and other cardiac causes such as chest pain, arrhythmias, or syncope. Cause of admission was determined by chart review by one of the authors (D.D.) who was blinded to treatment allocation. Heart failure-related quality of life was assessed using a modified version of the Minnesota Living with Heart Failure Questionnaire (MLHF). The MLHF is a 21 question instrument with a 6-point Likert scale response option and scores ranging from 0 to 105 [19]. In pilot testing of the MLHF, we found that low literacy patients had trouble answering questions with the standard 6-point Likert scale [15], so we changed the response scale to 4 points, using a visual display with stars to represent increasing severity. The 4-point Likert scale was scored as 0, 1, 3, and 5 to approximate standard scores on the MLHF. Secondary measures included heart failure self-efficacy, knowledge, and behaviors. Self-efficacy was measured with an 8 item scale developed for the behaviors needed in this trial as suggested by self-efficacy scale developers [20]. Respondents used a 4-point Likert scale yielding a total score from 0–24. We assessed heart failure knowledge using a knowledge test previously developed for this population [15], Heart failure self-management behavior was assessed by asking patients how often they weighed themselves. We used patient self-report and the medical chart to measure age, gender, race, insurance status, income, years of education, medication use, years with heart failure, and the presence of co-morbidities. We measured literacy using the Short Test of Functional Health Literacy in Adults (S-TOFHLA) [21], a well-validated scale that correlates well with other measures of reading ability [22]. Patients who scored in the inadequate literacy range on the S-TOFHLA were considered to have \"low literacy.\" The cut-point for inadequate literacy is roughly analogous to the 4th grade reading level.\n\nBODY.METHODS.SAMPLE SIZE:\nSample size was based on pilot study results showing a 9-point improvement in MLHF scores over 3-months with the intervention [15]. Detecting a 9-point difference between intervention and control group with 80% power and alpha set at 0.05 required 70 patients per group. We aimed to enroll 150 patients to account for possible attrition, but stopped at 127 because funding ended. 
We did not power this study to detect differences in hospitalization, but studies with even smaller numbers of patients have shown a difference for that outcome [7].\n\nBODY.METHODS.STATISTICAL METHODS:\nPatients who did not return any phone calls and did not return for follow-up assessment did not have outcome data for analysis. Patients who withdrew from the study were censored at the time of withdrawal; any data collected prior to withdrawal were included in the analysis. Baseline differences between groups were assessed using t-tests for continuous outcomes and chi-squared tests for categorical outcomes. For MLHF, heart failure self-efficacy and heart failure knowledge, we used two-sample t-tests. Non-parametric tests were also performed for all comparisons, but results did not differ from the parametric tests. Because of the small sample size and the unequal distribution of baseline characteristics, we adjusted for baseline differences using linear regression. Analyses of self-reported outcomes, such as MLHF, excluded patients who died or withdrew from the study before 6 or 12 month data was collected. For hospitalization or death, we used negative binomial regression and censored patients at the time of death or withdrawal from the study. Based on the likelihood ratio test, negative binomial regression was a better fit for the data than a Poisson regression. Additionally, the Vuong test confirmed that a zero-inflated model was inappropriate [23]. Because of uneven distribution of baseline characteristics, we performed analysis of covariance with negative binomial regression to control for baseline differences [24]. We identified the following variables that could contribute to hospitalization or death based on previous studies: age, race, gender, literacy level, hypertension, diabetes, creatinine, MLHF score, use of β-blockers, angiotensin converting enzyme (ACE) inhibitors or angiotensin receptor blockers (ARBs), use of digoxin, and presence of systolic dysfunction [7,25]. Variables were not included in the model if the relationship between the variable and the study group or outcome had a p value greater than 0.3. We started with a model including the following items to arrive at the best point estimate: age, gender, hypertension, creatinine, MLHF, use of β-blockers, and use of ACE inhibitors or ARBs. We then eliminated variables from the model if p > 0.30, and if the point estimate remained within 10% of the initial estimate. We prespecified a subgroup analysis in patients with low literacy to analyze if the intervention had a similar effect. The same analysis described above was repeated for the subgroups of patients with low literacy and those with higher literacy. The initial multivariate model for the subgroups analysis included: age, gender, hypertension, MLHF, use of β-blockers, and use of ACE inhibitors or ARBs.\n\nBODY.METHODS.ROLE OF THE FUNDING SOURCE:\nThe funding sources had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; or preparation, review, or approval of the manuscript.\n\nBODY.RESULTS.PATIENTS:\nWe screened 919 patients on furosemide between November 2001 and April 2003. 127 met eligibility criteria and agreed to participate (Figure 2). Of those not enrolled, 407 did not have heart failure according to their physician, 367 did not meet eligibility criteria and 27 declined to participate. 
Of those randomized to the control group, 1 never returned after the first visit, 1 withdrew during the study and 4 died during the study. Follow-up was completed for all of the remaining participants (98%) (Figure 3). Of those randomized to the intervention group, 3 never returned after the first visit, 4 withdrew during the study and 3 died during the study. Follow-up was completed for all of the remaining participants (93%). At baseline, most characteristics were similar between the two groups (Table 1). However, the control group had more participants with hypertension, fewer with diabetes, and fewer men. Of heart failure related characteristics, the control group had more participants with systolic dysfunction, and taking β-blockers. The intervention group had more participants taking ACE inhibitors or ARBs, and digoxin. Regardless of these differences, none were statistically significant. The control group did have statistically significantly higher baseline MLHF scores representing worse symptoms at baseline.\n\nBODY.RESULTS.HOSPITALIZATION OR DEATH:\nThere were 68 hospitalizations (65) or deaths (3) in the intervention group and 111 (107 hospitalizations, 4 deaths) in the control group. The crude all-cause hospital admission or death incidence rate ratio (IRR) was 0.69 (95% CI 0.40, 1.19). After adjusting for age, gender, use of ACE inhibitor or ARB, use of a β-blocker, presence of hypertension, and baseline MLHF, intervention patients were less likely to have the outcome (IRR = 0.53; 95% CI 0.32, 0.89). 61% of patients in the control group had at least one hospitalization or died, and 42% of patients in the intervention group had at least 1 hospitalization or died (p = 0.13).\n\nBODY.RESULTS.CARDIAC HOSPITALIZATION:\n39% of patients in the control group and 34% of patients in the intervention group had at least one hospitalization for cardiac causes (p = 0.55). The unadjusted IRR was 0.79 (95% CI 0.42, 1.5). After adjusting for baseline differences, the IRR was 0.85 (95% CI 0.44, 1.7).\n\nBODY.RESULTS.HEART FAILURE-RELATED QUALITY OF LIFE:\nIn unadjusted analysis, the control group, on average, improved 5 points on the MLHF and the intervention group improved by 1 point. The difference was not statistically significant (3.5 points, 95% CI 11, -4, p = 0.36). After adjusting for baseline differences between the groups, the difference was 2 points (95% CI 9, -5, p = 0.59) suggesting no effect on heart failure-related quality of life.\n\nBODY.RESULTS.OTHER OUTCOMES.KNOWLEDGE:\nHeart failure related knowledge improved more in the intervention group than in the control group. Mean difference in score improvement was 12 percentage points (95% CI 6, 18; p < 0.001).\n\nBODY.RESULTS.OTHER OUTCOMES.SELF-EFFICACY:\nHeart failure self-efficacy improved more in the intervention group than in the control group. Mean difference in score improvement was 2 points (95% CI 0.7, 3.1; p = 0.0026).\n\nBODY.RESULTS.OTHER OUTCOMES.SELF-CARE BEHAVIOR:\nSignificantly more patients in the intervention group than in the control group reported daily weight measurement at 12 months (79% vs. 29%, p < 0.001).\n\nBODY.RESULTS.SUBGROUP ANALYSES ACCORDING TO LITERACY:\nTwenty-four patients in each group had inadequate literacy based on the S-TOFHLA (Table 2). Among these patients, there was no difference in quality of life score in unadjusted and adjusted analyses (difference = -1.6; 95% CI -15, 12); p = 0.81). For the combined outcome of hospitalization or death, the unadjusted IRR was 0.77 (95% CI 0.30, 1.94). 
After adjusting for baseline differences, the IRR was 0.39 (95% CI 0.16, 0.91). Seventy-five patients had marginal or adequate literacy based on the S-TOFHLA. We found no difference in quality of life score in unadjusted and adjusted analyses (difference = -4.2; 95% CI -14, 6; p = 0.40). Among patients in the higher literacy group, the unadjusted IRR for hospitalization or death was 0.65 (95% CI 0.33, 1.3). After adjusting for baseline differences, the IRR was 0.56 (95% CI 0.30, 1.04). We did not find a statistically significant effect modification between literacy and the intervention.\n\nBODY.DISCUSSION:\nA heart failure self-management program designed for patients with low literacy reduced the rate of the combined endpoint of hospitalization or death. The prespecified subgroup analyses suggest that patients with low literacy benefited as much from the intervention as the patients with higher literacy. The success of our intervention reflects the goals of our program. We designed an easy-to-read and use educational booklet and self-management plan, and focused on overcoming barriers to learning self-management [15]. Our intervention was founded on teaching self-management. We focused on helping patients understand signs and symptoms of worsening heart failure and perform self-adjustment of diuretics based on weight fluctuation. Many care providers would not attempt to teach patients, particularly those with low literacy, how to self-adjust their diuretic medication. We found that, with careful teaching, many patients incorporated this strategy into their daily routine successfully, as demonstrated by improved self-care behaviors. Teaching self-adjustment of diuretics, rather than the conventional teaching to call the care provider if weight fluctuates, empowers patients to take more control over their illness. Self-adjustment of diuretic dose is a prominent aspect of the self-management training we provided to the intervention patients. Other programs to improve patient self-management have not been explicit in teaching patients to self-adjust their diuretic dose based on weight fluctuation. Although our outcomes are comparable to others', using this approach puts more control into the hands of the patient. Furthermore, our intervention appears effective among patients with low literacy skills, a group often overlooked for empowering interventions. Our study adds to the growing literature on disease management programs for patients with heart failure [6], particularly those that focus on self-management training [7-10]. Studies focusing on self-management training have demonstrated comparable improvements in hospitalization rates to more comprehensive programs that aim to improve the quality of pharmaceutical prescribing, provide home visits, and take place in specialized cardiology clinics [6]. Such comprehensive programs have also been shown to reduce mortality, but self-management programs have not [6]. We did not detect any difference in heart failure related quality of life which was the outcome we powered our study to detect. Other self-management studies that have found improved quality of life have enrolled patients during a heart failure hospitalization [8,9]; however, we enrolled patients in the outpatient setting while they were clinically stable. Improving quality of life for stable outpatients may be more difficult for this type of intervention. 
We have previously reported that patients with diabetes and low literacy benefited more from a disease management intervention than those with higher literacy skills [26]. A similar result in two different chronic diseases substantiates the claim that appropriately designed disease management programs may have greater effect for low literacy or vulnerable populations, who are most at risk for adverse outcomes with usual care. This finding is particularly important in light of the recent study by DeBusk and colleagues that did not find a reduction in hospitalization with a well-designed comprehensive intervention [13]. The authors and an accompanying editorial [14] suggested that the failure to detect improvement may have occurred because the patients studied were less at-risk than in other studies. They called for more research to determine better ways of targeting disease management. We believe that low literacy is an important marker for vulnerability to adverse outcomes, and that disease management programs targeted to patients with low literacy may be an effective way of focusing resources on those most able to benefit. If patients with low literacy are to be preferentially recruited for such programs, innovative outreach and screening efforts will likely be required, as patients with low literacy may face particular barriers to accessing such care. This study should be interpreted in light of its limitations. Research assistants were not blind to group assignment during the assessment of self-reported outcomes. As such, patients in the intervention may have been more likely to inflate their responses in an effort to please the interviewer. This effect would tend to inflate patient responses to the subjective assessments of heart failure-related quality of life, self-efficacy, and self-care behaviors. The MLHF questionnaire was modified from its original form to make it easier for patients with low literacy to respond. This change in the scale may have changed its ability to detect important changes in heart failure related quality of life. Because the groups' mean scores were almost identical, we do not feel this limitation changed our results. In a similar vein, most questionnaires are not validated in low literacy populations, raising questions as to their ability to perform to the same standards. Our sample size was small, which did not allow for an even distribution of baseline variables among the groups. We controlled for baseline differences between groups in our analysis. While it is controversial whether or not to control for baseline differences in randomized controlled trials, some analysts have argued that doing so improves the power without introducing bias [24]. A larger, multi-site study would offer better control of confounders, better generalizability, and more power to determine differences in effect according to literacy. We did not collect data on the resources needed to implement this type of intervention in usual settings, and such a study and cost-effectiveness analysis would be helpful for most interventions of this type. We used health educators, not nurses or physicians, to deliver the intervention. By using less highly trained individuals to deliver the intervention, we enabled nurses and physicians to perform clinical tasks more commensurate with their training. 
Other studies that have performed global cost estimates have found that the savings from reductions in hospitalizations exceed the cost of the intervention [6].\n\nBODY.CONCLUSION:\nIn conclusion, our heart failure self-management program, designed for patients of all literacy levels, appears to reduce rates of hospitalization and death. Patients with low literacy, and other vulnerable patients, may stand to benefit most from these programs. Further research into the design, implementation, and dissemination of disease management programs for low literacy patients will be crucial for meeting the health care needs of the growing population of patients with chronic illness.\n\nBODY.COMPETING INTERESTS:\nDrs. DeWalt and Pignone have received honoraria and grants from Pfizer, Inc., Dr. Rothman has received grants from Pfizer, Inc., and Dr. Sueta is in their speakers bureau.\n\nBODY.AUTHORS' CONTRIBUTIONS:\nDD conceived of the study, participated in its design and coordination, performed statistical analyses, interpretation of the data, and drafted the manuscript. RM, MB conceived of the study and participated in its coordination. MK, KC coordinated the study, and collected the data, RR, CS participated in study design and interpretation of the data. MP conceived of the study, participated in its design and coordination, and interpretation of the data. All authors reviewed the manuscript for important intellectual content and gave final approval.\n\nBODY.PRE-PUBLICATION HISTORY:\nThe pre-publication history for this paper can be accessed here:\n\n**Question:** Compared to Standard information about self-care what was the result of Follow-up and thorough education on self-care on Baseline hypertension?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** no significant difference\n"
}
] |
no significant difference |
258 |
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Impact and Process Evaluation of Integrated Community and Clinic-Based HIV-1 Control: A Cluster-Randomised Trial in Eastern Zimbabwe\n\n ABSTRACT.BACKGROUND:\nHIV-1 control in sub-Saharan Africa requires cost-effective and sustainable programmes that promote behaviour change and reduce cofactor sexually transmitted infections (STIs) at the population and individual levels.\n\nABSTRACT.METHODS AND FINDINGS:\nWe measured the feasibility of community-based peer education, free condom distribution, income-generating projects, and clinic-based STI treatment and counselling services and evaluated their impact on the incidence of HIV-1 measured over a 3-y period in a cluster-randomised controlled trial in eastern Zimbabwe. Analysis of primary outcomes was on an intention-to-treat basis. The income-generating projects proved impossible to implement in the prevailing economic climate. Despite greater programme activity and knowledge in the intervention communities, the incidence rate ratio of HIV-1 was 1.27 (95% confidence interval [CI] 0.92–1.75) compared to the control communities. No evidence was found for reduced incidence of self-reported STI symptoms or high-risk sexual behaviour in the intervention communities. Males who attended programme meetings had lower HIV-1 incidence (incidence rate ratio 0.48, 95% CI 0.24–0.98), and fewer men who attended programme meetings reported unprotected sex with casual partners (odds ratio 0.45, 95% CI 0.28–0.75). More male STI patients in the intervention communities reported cessation of symptoms (odds ratio 2.49, 95% CI 1.21–5.12).\n\nABSTRACT.CONCLUSIONS:\nIntegrated peer education, condom distribution, and syndromic STI management did not reduce population-level HIV-1 incidence in a declining epidemic, despite reducing HIV-1 incidence in the immediate male target group. Our results highlight the need to assess the community-level impact of interventions that are effective amongst targeted population sub-groups.\n\nBODY.INTRODUCTION:\nHIV-1–prevalence declines may now be occurring in some sub-Saharan African countries [1]. However, there remains little direct evidence that prevention measures—rather than natural HIV-1 epidemic dynamics [2] or behaviour change prompted by mortality [3]—have contributed to the slowing of HIV-1 epidemics [4,5]. Syndromic management of sexually transmitted infections (STIs) proved effective early in an HIV-1 epidemic in north-west Tanzania [6]. Peer education to promote safe behaviours showed promise in early process evaluations [7], but a randomised controlled trial (RCT) of factory workers in Harare, Zimbabwe, done in the mid-1990s, proved inconclusive [8]. Subsequent RCTs of syndromic management [9] and mass treatment of STIs [10], together with an information, education, and communication (IEC) behaviour-change programme [9], showed no effect in more mature epidemics. 
Integrated implementation of synergistic community-based HIV-1 control strategies could be a more cost-effective and sustainable approach to HIV-1 prevention than parallel application of vertical (top-down) programmes [11]. One scientific evaluation of such a strategy has been reported in which a combination of IEC activities amongst the general population and syndromic STI management showed no impact on HIV-1 incidence at the population level [9], although participation in the IEC activities was associated with reduced HIV-1 infection in women [12]. We conducted a cluster-RCT to test the hypothesis that integrated implementation of combined community- and clinic-based HIV-1 prevention, in which IEC activities focus primarily on high-risk populations, can be feasible and effective in reducing HIV-1 incidence in a major maturing epidemic in eastern Zimbabwe (Protocols S1 and S2; Text S1 and S2).\n\nBODY.METHODS.PARTICIPANTS AND RANDOMISATION PROCEDURE:\nThe study communities comprised six pairs of communities matched by socio-economic type—small town, tea/coffee estate, forestry plantation, roadside trading settlement, and subsistence farming area (two pairs) (Figure 1). Each community included at least one Government or Mission health centre. It was anticipated that HIV-1 incidence would be similar within each pair of communities. Within each pair, one community was assigned at random (un-blinded coin toss by a Ministry of Health official witnessed by programme and research personnel) to receive the additional intervention and the other to be the control. These procedures were designed to ensure that Mission, non-governmental organisation, and private sector programmes (for details, please refer to the following section) would be distributed evenly between intervention and control sites. Figure 1Location of Intervention and Control Communities in Manicaland Province, Eastern Zimbabwe We assessed the effect of the intervention using results from laboratory tests for HIV-1 infection and questionnaire data collected in the baseline and 3-y follow-up rounds of a population-based, closed-cohort survey. The 12 study communities were enumerated in a phased manner, with paired communities being enumerated consecutively to minimise the effects of any seasonal factors. HIV-1–prevention activities were commenced in each intervention community shortly after completion of the baseline survey in that community. In each community, individuals eligible for the study were identified in the first round using data from household listings prepared in an initial census. All males and females aged 17–54 y and 15–44 y at last birthday (the age groups expected to have the highest incidence of HIV infection), respectively, who had slept in a household in the community for at least four nights in the previous month, and who had also done so at the same time 1 y earlier, were considered eligible for the study. In heterosexually driven HIV-1 epidemics, risk of infection can be correlated amongst marital partners [13]. 
Therefore, to maximise statistical power to detect differences in HIV-1 incidence, enrolment was restricted to one randomly selected member per marital group.\n\nBODY.METHODS.INTERVENTIONS:\nIntervention and control communities were to receive standard Government services including basic syndromic STI management, condom distribution from health clinics and Zimbabwe National Family Planning Council outlets, home-based care, and limited HIV/AIDS–focussed IEC activities (e.g., occasional AIDS-awareness meetings and distribution of posters and leaflets). In addition, social marketing of male and female condoms would be provided through an ongoing national programme [14]. The intervention comprised targeted and population-level strategies to promote safer sexual behaviour and to improve treatment of STIs that facilitate HIV-1 transmission. The intervention strategies were implemented by two local non-governmental organisations (Family AIDS Caring Trust and the Biomedical Research and Training Institute) and the Zimbabwe Ministry of Health and Child Welfare through an integrated programme of community- and clinic-based activities. Integration of the individual programme components was achieved through the joint involvement of the participating agencies in the planning and implementation of activities and through the inclusion of biomedical and behavioural aspects within each component. The programme design comprised three key components: (1) peer education and condom distribution amongst commercial sex workers and male clients at workplaces and in the general community, supported by income-generating projects; (2) strengthened syndromic management of STI services at local health centres; and (3) open days with HIV/AIDS IEC activities at health centres to promote safer sexual behaviour and to increase the uptake of local STI treatment services. The peer-education component was based on a model which had been developed by the Project Support Group at the University of Zimbabwe [7] and which had been widely implemented within Zimbabwe and neighbouring countries. Activities were held weekly at workplaces and at locations within the general community (e.g., beer halls and markets) where casual relationships were most frequently formed [15]. The target population comprised sex workers and male clients who form a bridge population in HIV transmission [16] between sex workers and the monogamous (or serial monogamous) majority of women [17,18]. It was posited that the high HIV-1 incidence observed amongst young women could be reduced by altering the behaviour of their older male partners whose own behaviour was intrinsically more risky [19]. The behavioural component would be reinforced in counselling sessions with STI patients and through micro-credit income-generating projects to reduce unmarried women's dependence on commercial sex work. The micro-credit scheme consisted of small interest-free loans repayable over 10 mo, provided to groups and to individuals together with training in small-business management. The targeted activities would be extended to the general population through open days held at local health centres. Besides providing basic HIV/AIDS information, it was envisaged that programme meetings and activities, by their continuous nature, would sustain high levels of awareness of the risks of HIV transmission and would facilitate renegotiation of community social norms, making safer behaviours easier to adopt. 
The key messages of the programme were: (1) remain faithful to one regular sexual partner; (2) use condoms consistently with any casual sexual partners; and (3) seek prompt and effective treatment for any STIs. Syndromic management of STIs at primary healthcare centres was first introduced in Zimbabwe in the 1980s [20] and formed the basis of STI diagnosis and treatment services at baseline in the intervention and control communities. It was envisaged that these services could be strengthened and made more effective through a programme of regular classroom training and on-site supervision of nursing staff, through the introduction of training in systemic counselling for STI patients, and through the provision of small quantities of treatment drugs to cover delays in routine supplies. Quality-assurance procedures applied in the intervention communities included pre- and post-training tests for peer educators and, for nursing staff, attending the syndromic STI management and systemic counselling courses, regular on-site supervision (including random spot checks) and training, refresher courses, routine planning meetings and monitoring of service statistics, and quarterly workshops where detailed programme procedures were reviewed and updated. An interim qualitative process evaluation of intervention activities was conducted during the inter-survey period, and a report on the findings was provided to the implementing organisations.\n\nBODY.METHODS.OUTCOME AND PROCESS MEASURES:\nThe primary outcome of the study was HIV-1 incidence at the community level amongst individuals who were uninfected at baseline. Blood was collected onto Whatman No. 3 filter paper and transported to the Biomedical Research and Training Institute laboratory in Harare. Blood spots were air dried at 4 °C and, for long-term (>1 mo) storage, were kept at −20 °C. For baseline studies, blood was eluted into phosphate-buffered saline, and antibodies to HIV were detected using a dipstick dot EIA (ICL-HIV-1/HIV-2 Dipstick, [PATH, http://www.path.org; produced locally in Thailand]) and a standard protocol [21,22]. All positive results and a 10% sample of negative results were confirmed using a plate EIA (Abbott Third-Generation HIV-1/HIV-2 EIA [http://www.abbott.com] or Genelavia MIXT HIV-1/HIV-2 [Sanofi Diagnostics Pasteur, Marnes La Coquette, France]). At follow-up, a similar protocol was followed. Only the samples from those participants recorded as being HIV seronegative at baseline were tested at follow-up, again using a dot EIA (ICL-HIV-1/HIV-2 Dipstick, [PATH, produced locally in India]). Where seroconversion was indicated, the frozen stored baseline sample was retested to confirm the original negative result using the same dot EIA test. Where the baseline result remained negative, the Abbott EIA test was used to confirm both baseline and follow-up results. The change in place of manufacture of the dot EIA and the exclusive use of Abbott test kits to confirm positive sera at follow-up was due only to changes in the supply of test reagents, and not to perceived changes in sensitivity or specificity [23]. Apart from the principal investigators (based in Harare, London and Oxford) and those nurses given permission by participants requesting voluntary counselling and testing (VCT), all research personnel remained blind to the HIV-1 status of individual participants. 
Secondary outcomes, measured at the community and individual level, were self-reported genital ulcers and urethral or vaginal discharge in the past year (STI cases), STI treatment effectiveness (self-reported cessation of symptoms), indicators of sexual and health-seeking behaviour change, and HIV/AIDS knowledge. The behaviour-change variables assessed were sexual debut, sexual partner change in the past year, non-regular partnerships in the past month, and unprotected sex with regular and casual partners in the past 3 y. The data on sexual partnerships and condom use were collected using the Informal Confidential Voting Interview method for 75% of respondents selected at random in the first round of the survey. This method includes procedures to build rapport, ensure a non-judgemental interview approach, and provide reassurance that there are no right or wrong answers to questions of a personal nature, and uses a simple secret voting-box system to reduce embarrassment and guarantee confidentiality in low-development settings [18]. Its use has been shown to be associated with greater disclosure of socially proscribed behaviour in the study population [24]. Process indicators examined comprised changes in knowledge and psychosocial status and indicators of programme coverage and quality.\n\nBODY.METHODS.SAMPLE-SIZE CALCULATIONS:\nInitial sample-size calculations assumed 20% HIV-1 prevalence at baseline, 30% loss to follow-up after 2 y, and 80% power to detect a 40% reduction in HIV-1 incidence in the intervention communities compared with control communities, assuming a background yearly incidence of 2%. Based on six pairs of communities and a co-efficient of variation between communities of 0.15, the required sample size in each community was 1,000. Funding constraints and slower than anticipated implementation of intervention activities led to revisions of the sample size for each community to 800 and the length of follow-up to 3 y, respectively. Assuming a proportionate increase in loss to follow-up to 41%, these arrangements also yielded 80% power to detect a 40% reduction in HIV-1 incidence.\n\nBODY.METHODS.STATISTICAL METHODS:\nTo test the randomisation with small numbers of communities, HIV-1 prevalence, STI history, and socio-demographic characteristics were compared at baseline for study participants in the intervention and control communities, together with uptake of STI treatment and VCT services offered at baseline. Outcome and process indicators were compared for intervention versus control communities. Analysis of the primary outcome was on an intention-to-treat basis. Incident events and person-years at risk of seroconversion were used to calculate HIV-1 incidence rates and unadjusted and adjusted incidence rate ratios (IRR) with 95% confidence intervals (CIs) for each pair of communities. Adjustment was made for sex, 3-y age group, and community-level baseline HIV prevalence. The overall IRRs (unadjusted and adjusted) were taken to be the geometric means of the IRRs for the six pairs of communities. We calculated 95% CIs for each geometric mean as geometric mean ± 1.96 × standard error of the geometric mean. Paired student t-tests on the logarithms of the pair-specific IRRs were used to test whether these differed significantly from unity [25]. The coefficient of variation between communities was calculated based on baseline HIV prevalence using a standard procedure for pair-matched studies [26]. 
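The pair-matched summary just described (pair-specific IRRs combined as a geometric mean with a 95% CI, plus a paired t-test on the log IRRs against unity) can be sketched in a few lines. The snippet below is illustrative only: it works on the log scale and exponentiates, which is one reading of "geometric mean ± 1.96 × standard error of the geometric mean", and the example IRRs are invented rather than the trial's values.

```python
import numpy as np
from scipy import stats

def summarise_pairwise_irrs(irr_pairs):
    """Summarise pair-specific incidence rate ratios (intervention / control).

    Returns the geometric mean IRR, an approximate 95% CI, and the p-value of
    a paired t-test on the log IRRs against the null value of 1. Computing the
    CI on the log scale is an interpretation of the wording in the Methods.
    """
    log_irr = np.log(np.asarray(irr_pairs, dtype=float))
    k = log_irr.size                                   # number of community pairs
    mean_log = log_irr.mean()
    se_log = log_irr.std(ddof=1) / np.sqrt(k)

    gm = np.exp(mean_log)                              # geometric mean IRR
    ci = np.exp([mean_log - 1.96 * se_log,
                 mean_log + 1.96 * se_log])            # approximate 95% CI
    t_stat, p_value = stats.ttest_1samp(log_irr, 0.0)  # H0: overall IRR = 1
    return gm, tuple(ci), p_value

# Illustrative (made-up) pair-specific IRRs for six community pairs:
print(summarise_pairwise_irrs([1.4, 1.1, 0.9, 1.6, 1.3, 1.0]))
```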
Analyses of prevalence for secondary outcome and process variables were conducted separately for male and female respondents seen at both survey rounds by fitting logistic regression models to the individual-level data and adjusting for community pair and, where available, value of variable at baseline. Since most programme activities were targeted and overall coverage of programme activities was therefore limited, sub-group analyses, adjusted for community pair, were done for HIV-1 incidence and behavioural outcomes to assess the individual-level effects of attendance at programme meetings. Data were entered and validated using SPSS-PC (http://calcnet.mth.cmich.edu/org/spss/index.htm) and data analysis was conducted in Stata version 7 (http://www.stata.com). Statistical tests were double-sided and results were taken to be significant at the 5% level.\n\nBODY.METHODS.ETHICAL APPROVAL:\nAll study participants in the intervention and control communities were offered free VCT for HIV-1, an information sheet on HIV/AIDS, results from a diagnostic test for Trichomonas vaginalis [27] (done at baseline only), and free treatment for T. vaginalis and other STIs from a research nurse. Testing and treatment for T. vaginalis was provided because the prevalence of other curable STIs was low in the study areas [22]. Antibodies reactive with T. vaginalis were detected in DBS eluates following a previously described procedure [27,28]. Written informed consent was sought as a condition of enrolment and continuation in the study. Prior ethical approval was obtained from the Research Council of Zimbabwe, number 02187; the Applied and Qualitative Research Ethics Committee in Oxford, United Kingdom, N97.039; and the UNAIDS Research Ethics Committee, ERC 98/03.\n\nBODY.RESULTS.PARTICIPANT FLOW:\nIn round 1 of the census (July 1998 to February 2000), 5,943 and 6,037 eligible individuals in the intervention (total population size 18,104) and control (18,633) communities, respectively, were selected for recruitment into the study cohort (Figure 2). In round 2, 3 y later (July 2001 to February 2003), 1,044 (23%) and 1,144 (26%) of baseline respondents who were still alive had migrated away from the intervention and control communities, respectively, and were therefore lost to follow-up (Figure 2). At both baseline and follow-up, migrants and non-migrants had similar risks of HIV-1 infection and associated behaviour [29]. Of those still resident in the intervention and control communities, 2,664 (75%) and 2,564 (77%), respectively, were interviewed and blood samples taken for a second time. Temporary absence from the usual place of residence was the main reason for non-participation in the intervention (n = 794, 95%) and control (n = 698, 94%) communities. The overall proportions of baseline respondents followed up at the end of the study were 55% and 56% in the intervention and control communities, respectively. The median follow-up of communities was 3.0 y (range of median within communities, 3.0–3.1). Figure 2Flow-Chart Comparing Participation and Follow-Up Rates in the Intervention and Control CommunitiesIndividuals enrolled in round 1 and still resident in the study communities were considered eligible for participation in round 2.\n\nBODY.RESULTS.BASELINE DATA:\nHIV-1 prevalence was higher in the intervention communities than in the control communities (24% versus 21%, risk ratio 1.13 [95% CI 1.05–1.22], p = 0.001). T. 
vaginalis infection, secondary school education, and spatial mobility were more common in the control communities, whilst history of genital discharge and uptake of STI treatment and VCT services offered in the survey were low overall but more frequent in the intervention communities (Table 1). However, the differences in each case were small and were unlikely to be clinically meaningful. Table 1 Baseline Characteristics of the Study Populations \n\nBODY.RESULTS.OUTCOMES AND ESTIMATION:\nMedian follow-up per person was 2.9 y (range 1.4–3.9) and 3.0 y (range 1.5–4.1) in the intervention and control communities, respectively. In total, 4,052 individuals had 212 incident events of HIV-1 during 12,009 person-years at risk, giving an HIV-1 incidence rate of 1.77 per 100 person-years at risk. HIV-1 incidence was higher in communities with higher baseline HIV prevalence (IRR 11.49 [95% CI 1.80–73.40], p = 0.010), but this difference disappeared after adjustment for stratification by community type (p = 0.8). HIV-1 incidence was higher in the intervention communities than in the control communities overall, and in each community type, except in the forestry plantations where it was almost identical (Table 2). The difference was not significant after adjustment for sex, age group, and baseline HIV prevalence (IRR 1.27 [95% CI 0.92–1.75], p = 0.012). The observed coefficient of between-community variation was 0.14. Table 2 HIV Prevalence at Baseline and HIV Incidence and IRRs for Intervention Versus Control Communities Looking at outcome indicators for community members (rather than for communities—the unit of randomisation), self-reported STI symptoms were similar in both sets of communities (Table 3). Treatment for STI symptoms in males was effective more frequently in the intervention communities, with men in the intervention community in five of the six matched pairs reporting reduced symptom recurrence. However, more young women in the intervention than in the control communities had started sex, and reports of unprotected sex with a casual partner in the study period were more common in the intervention communities. No differences were observed in consistent condom use with regular partners between the two sets of communities. In the intervention communities, knowledge about HIV/AIDS was enhanced amongst men, and more respondents reported a close relative or family member with AIDS (sex- and age-adjusted prevalence odds ratio 1.22 [95% CI 1.05–1.42], p = 0.009). Slightly more women in the intervention communities reported that condom use within marriage was becoming acceptable, but a greater proportion of men agreed with the statement that \"condoms reduce the pleasure of sex\". Table 3 Biomedical, Sexual Behaviour, and Psychological Outcomes at Follow-up by Residence in the Intervention and Control Communities A total of 63,261 peer-education meetings were held, and 6.8 million condoms were distributed by the programme in the intervention communities (Table 4). Outputs increased over time as new communities entered the programme. However, owing to high inflation and economic decline, the micro-credit income-generating projects proved impossible to implement. We were able to obtain data on STI episodes treated at clinics in the 11 out of 12 study communities that reported cases to the administrative districts of Mutasa and Makoni. In the three intervention communities each in Mutasa and Makoni, STI cases fell by 66% and 51%, respectively, over the 3-y study period. 
Similar declines of 67% and 52% occurred at clinics in the four control communities in Mutasa and the one control community in Makoni. Coverage of training in syndromic STI management and systemic counselling for nursing staff was high (Table 4). Table 4 Summary of Service Statistics on Programme Output Most of the activities were targeted at high-risk groups. In the general population sample interviewed in the follow-up survey, 1,779 (35%) and 647 (13%) of 5,098 respondents reported attending an HIV/AIDS meeting and a programme meeting, respectively (Table 5). More respondents in the intervention communities than in the control communities attended an HIV/AIDS meeting (41% versus 28%, prevalence rate ratio 1.44 [95% CI 1.33–1.56], p < 0.001) and a programme meeting (20% versus 5%, 4.27 [95% CI 3.52–5.17], p < 0.001), and participation was higher among men than women (prevalence rate ratio 1.32 [95% CI 1.14–1.53], p = 0.002). Fewer women in the intervention communities had heard about HIV/AIDS from external sources or believed that STI drugs were available at their local clinics. Sixty-two (2%) out of 2,528 respondents in the control communities reported spending at least 1 d in the past month in the intervention communities; the equivalent number for respondents in the intervention communities visiting control communities was 70 (3%) out of 2,683. Table 5 Intervention Coverage \n\nBODY.RESULTS.ANCILLARY ANALYSES:\nIn exploratory analysis to assess where the intervention failed, we found that HIV-1 incidence was reduced in males (IRR 0.48 [95% CI 0.24–0.98], p = 0.044) who reported attending programme meetings, after adjustment for the targeting of activities to groups with high-risk behaviour (Table 6). Amongst men who reported one or more casual sexual partners in the past 3 y, fewer of those who attended meetings reported unprotected sex with these partners (prevalence odds ratio 0.45 [95% CI 0.27–0.75], p = 0.002). HIV-1 incidence was not associated with programme participation in women. Table 6 HIV Incidence and IRRs by Meeting Attendance and Sex \n\nBODY.DISCUSSION.INTERPRETATION:\nWe conducted a scientific trial of the feasibility and impact of an integrated community- and clinic-based HIV-1–prevention intervention. The income-generating projects apart, the intervention activities were feasible. The outputs of the programme were extensive with more than 63,000 meetings being conducted and almost 7 million condoms distributed by trained peer educators. Programme messages were considered relevant and realistic. Local STI treatment and counselling services were strengthened and promoted in accordance with the intervention protocol. For male participants, these activities improved HIV/AIDS knowledge, increased the effectiveness of STI treatment, increased consistent condom use with casual partners, and reduced HIV-1 incidence. However, the cluster-RCT results clearly show that the intervention had no positive impact at the community level and suggest possible detrimental effects on the onset of female sexual activity and condom use with casual partners over a 3-y timeframe. Did the cluster-RCT design fail to capture the true effect of the intervention? There are three possibilities: (1) inadequate statistical power; (2) insufficient follow-up; and (3) contamination of intervention within control communities. 
The study design provided adequate statistical power to detect a meaningful average reduction (40%) in HIV-1 incidence in the intervention versus the control communities over a 3-y observation period. In hindsight, an effect size of 40% was too optimistic and the study had insufficient power to detect a smaller effect. However, there was no trend in the results towards reduced HIV-1 incidence in the intervention communities. Largely due to migration, attrition was close to that anticipated in the study design and was comparable to other recent cohort studies [6,10,9,30]. Migrants had similar characteristics and sexual behaviour to non-migrants [29]. The results of the exploratory sub-group analysis generate the hypothesis that high-risk behaviour was reduced in males attending programme meetings but did not translate into a wider impact on HIV-1 incidence at the population level. Changes in core and bridge populations may take more time to reflect in the general population than was observed in the trial. However, a longer period of follow-up would have increased attrition, and the finding of a possible adverse effect at the population level meant that it would not have been ethical to continue with the same intervention. Future trials of behaviour-change interventions may need to include multiple rounds with phased recruitment and (where interim results are favourable) may need to consider phased intervention implementation. We minimised intervention contamination by selecting physically separated study communities, and movements between intervention and control communities were rare. However, a similar peer-education programme was implemented in one control community (small town), and HIV-1–prevention activity was considerable in all control communities that also had greater access to information from external sources. In some cases, programme messages (e.g., promotion of condom use) conflicted with those of other agencies working in the intervention communities. The effects of these other programmes could have limited our ability to detect a reduction in HIV-1 incidence caused by the current intervention. The absence of an observed effect of the intervention was not explained by differences in HIV-1 prevalence, sexual behaviour, STI cofactors, mobility, or socio-demographic composition at baseline. The earlier sexual debut in females and greater unprotected sex with casual partners seen in the intervention communities during the study period were not present at baseline but could reflect increased willingness to report high-risk behaviours in settings where there was more open discourse about HIV and AIDS. The peer-education programme could have had some effect for male but not for unmarried female participants. Preliminary findings from subsequent qualitative investigations indicate that, in the predominantly rural communities in which the study was conducted, poverty and the associated failure of income-generating projects meant that some peer educators were unable to maintain safer behaviours. Given their increased visibility within the community—intended to enhance their status and self-esteem and, thus, to reinforce their commitment to and role as models for behaviour change—they may, inadvertently, have served as negative role models and, thereby, may have contributed to the greater female early-age sexual activity. 
Free distribution of condoms by women still engaging in unprotected commercial sex led to their being poorly valued and reinforced their association with promiscuity.\n\nBODY.DISCUSSION.GENERALISABILITY OF FINDINGS:\nEpidemiological context can affect the impact of interventions [31], and structural obstacles can limit the pace and extent to which activities are implemented and the quality of these activities [32]. The HIV-1 epidemic stabilised in eastern Zimbabwe during the study period, with HIV-1 prevalence declining by 40%–50% in young adults [23]. This decline was accompanied by delayed sexual debut, reduced sexual partner change, and consistent condom use with casual partners [33,23]. Prevalence of syphilis, gonorrhoea, and Chlamydia is low, but non-curable herpes simplex virus type 2 remains common [22]. Risk reduction makes transmission more fragile, and an intervention could have a larger effect when set against secular behavioural changes [2]. Mathematical model simulations suggest that there would also be a greater chance of detecting a significant effect of the intervention even though there would be fewer seroconversions to power the calculation [34,35]. Structural obstacles to intervention implementation included HIV/AIDS mortality which disrupted the programme by claiming the lives of two programme coordinators and several of the nursing staff and peer-educators. Economic decline made the income-generating projects unfeasible and reduced the effectiveness of other components of the intervention. We believe that the coverage of the peer-education programme was satisfactory, given the focus on highly sexually active individuals. Meeting coverage could have been under-estimated in the survey since one-to-one discussions and activities at beer halls and other public places may not have been recognised as meetings by those present. However, the high level of spatial mobility limited the number of people who were reached at the required level of intensity and consistency, whilst national shortages of foreign currency restricted fuel and drug supplies, hampered attempts to extend community activities into the more remote rural areas, and disrupted the STI treatment programme in both the intervention and control communities. The intervention that we evaluated could have greater effect where an HIV-1 epidemic is younger, HIV-1 incidence is greater, local sexual networks are less diffuse, background STI control is weak, herpes simplex virus type 2 is less common, population mobility is lower, and/or the socio-economic climate is stable. We cannot rule out an effect of peer education in the urban intervention community since similar activities were implemented in the control community. Targeted peer education may work better in towns where bar-based sex work is more extensive. The absence of reduced HIV-1 incidence in farming estates reinforces doubts raised by the Harare factory workers study [8] concerning the efficacy of workplace peer education.\n\nBODY.DISCUSSION.OVERALL EVIDENCE:\nThese findings are important since the strategies evaluated—i.e., peer education, condom distribution, and syndromic STI management—are theory-based, have the potential for independent effects [11], and are widely promoted [36,37]. Syndromic STI management was effective in a nascent epidemic [6]. However, our disappointing findings echo those from recent trials [9,12] and emphasise the need for alternative strategies of behaviour-change promotion. 
Social marketing of condoms [14], larger poverty-alleviation programmes to reduce women's reliance on sex work—based on skills training and careful market research rather than on small-scale income-generating projects—and strategies which reach beyond high-activity core groups, such as the Popular Opinion Leader programme [38,39], and client-centred counselling [40], could be more viable and effective in reducing HIV-1 transmission in rural areas. Given the necessary economic conditions, unmarried women may still play a useful role in bar-based programmes since beer halls remain foci for high-risk behaviour [41,15].\n\nBODY.SUPPORTING INFORMATION.TRIAL REGISTRATION:\nThis trial has the registration number ISRNCT00390949 in the International Standard Randomized Controlled Trial Number Register. Found at: http://www.clinicaltrials.gov/ct/show/NCT00390949?order=1\n Protocol S1: Protocol (35 KB DOC). Protocol S2: Revisions to Protocol (35 KB DOC). Text S1: CONSORT Checklist (48 KB DOC). Text S2: Ethical Approval, Information Letter, and Consent Forms (2.8 MB PDF).\n\n**Question:** Compared to Baseline communities that received standard Government services what was the result of Baseline communities that received additional education and preventive measures on Intake of STI treatment?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** no significant difference\n"
}
] |
no significant difference
|
441
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: A randomized trial to monitor the efficacy and effectiveness by QT-NASBA of artemether-lumefantrine versus dihydroartemisinin-piperaquine for treatment and transmission control of uncomplicated \n\n ABSTRACT.BACKGROUND:\nMany countries have implemented artemisinin-based combination therapy (ACT) for the first-line treatment of malaria. Although many studies have been performed on efficacy and tolerability of the combination arthemeter-lumefantrine (AL) or dihydroartemisinin-piperaquine (DP), less is known of the effect of these drugs on gametocyte development, which is an important issue in malaria control.\n\nABSTRACT.METHODS AND RESULTS:\nIn this two-arm randomized controlled trial, 146 children were treated with either AL or DP. Both groups received directly observed therapy and were followed for 28 days after treatment. Blood samples were analysed with microscopy and NASBA. In comparison with microscopy NASBA detected much more gametocyte positive individuals. Moreover, NASBA showed a significant difference in gametocyte clearance in favour of AL compared to DP. The decline of parasitaemia was slower and persistence or development of gametocytes was significantly higher and longer at day 3, 7 and 14 in the DP group but after 28 days no difference could be observed between both treatment arms.\n\nABSTRACT.CONCLUSION:\nAlthough practical considerations could favour the use of one drug over another, the effect on gametocytogenesis should also be taken into account and studied further using molecular tools like NASBA. This also applies when a new drug is introduced.\n\nABSTRACT.TRIAL REGISTRATION:\nCurrent controlled trials ISRCTN36463274\n\nBODY.BACKGROUND:\nIn response to widespread resistance of Plasmodium falciparum parasites to the commonly used drugs chloroquine (CQ) and sulphadoxine-pyrimethamine (SP), many African countries recently adopted artemisinin-based combination therapy (ACT) as first-line treatment for uncomplicated malaria. The combination artemether-lumefantrine (AL) proved to be highly effective and well-tolerated in several studies in Africa [1-5]. Disadvantages of this drug combination are the twice-daily dosing and the fact that it should be administered with a fat-rich meal [6-8] or at least a cup of soya milk [2]. In Uganda, in an area of intense malaria transmission, recurrence of parasitaemia within 28 days occurred in 29% of AL treated patients, in 8.9% adjusted by genotyping, indicating recrudescence [4]. Another ACT, dihydroartemisinin combined with piperaquine (DP), which was originally developed in China, is increasingly used in Southeast Asia [9,10]. Piperaquine is an orally active bisquinoline with a half-life elimination time of 2.5–4 weeks [9]. The drug is structurally related to CQ, but still active against highly CQ-resistant P. falciparum strains [1,11]. 
This relatively inexpensive drug was well tolerated and highly effective in Southeast Asia [1,7,11-13] as was the case in two studies in Africa [4,14]. Consequently, both AL and DP are considered to be amongst the most promising artimisinin-based drugs [4,14]. Artemisinin-based drugs also act on gametocytes and thus on transmission, at least in low transmission areas [15-17]. In high transmission areas of Africa, not much information is yet available on gametocytaemia after ACT treatments and on possible influence on transmission. In a comparative study of AL and DP in Uganda, the appearance of gametocytes in those who did not have gametocytes at the start of treatment was lower from day 15 to day 42 of follow up in those treated with DP than in those treated with AL [4]. A limitation of this study was the fact that gametocytaemia was assessed by microscopic examination only. It has recently been shown that sub-microscopic gametocyte densities may significantly contribute to the transmission of malaria [18]. Adequate assessment of gametocytaemia is important. Quantitative nucleic sequence based amplification technique (QT-NASBA) has been shown to be much more sensitive for the detection of gametocytes than microscopy [3,18-20]. In this study, the post-treatment prevalence of gametocytes in children in Mbita, western Kenya was assessed, after treatment with AL and DP. Microscopy and QT-NASBA for the quantification of gametocytaemia were compared and effectiveness of both drugs regarding clinical symptoms, clearance of parasites and tolerability was assessed.\n\nBODY.MATERIALS AND METHODS.STUDY SITE AND POPULATION:\nThe study was conducted in Mbita, western Kenya, at the shores of Lake Victoria during the high malaria transmission season of April-July 2007. Mbita, is an area with highly variable transmission that depends on the local environmental circumstances that can support mosquito conditions. The EIR is calculated to be 6 infectious bites per person per month [21]. Children (6 months-12 years of age) visiting the out-patient clinic of the health centre and diagnosed with uncomplicated malaria were included after informed consent from parents or guardians. Inclusion criteria were: uncomplicated P. falciparum malaria with initial parasitaemia between 1,000 and < 200,000 parasites/μl blood, axillary temperature ≥ 37.5°C (measured with a digital thermometer) or a history of fever. Children with severe malaria, mixed infection or other underlying illness were excluded from the study. In total 146 children were recruited for the present study. Ethical approval for this study was obtained from appropriate local authorities and The Kenya Medical Research Institute (KEMRI, Nairobi, Kenya) Ethical Steering Committee (SSC protocol No 948). The trial was registered as an International Standard Randomized Controlled Trial at current controlled trials (ISRCTN36463274).\n\nBODY.MATERIALS AND METHODS.STUDY DESIGN AND TREATMENT:\nFollowing diagnosis (at day 0), the patients were randomly allocated to one of the two treatment groups following a computer generated randomization list. One group was assigned DP (Sigma-Tau, Italy) once per day for three days. One tablet of the study drug contained 20 mg of dihydroartemisinin and 160 mg of piperaquine (paediatric formulation). Treatment was according to body weight as follows: children between 4–7 kg received half a tablet per dose, those between 7–13 kg 1 tablet, 13–24 kg 2 tablets per dose and children between 24–35 kg 4 tablets. 
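Purely as an illustration of the weight-band dosing just listed for the paediatric DP formulation (the AL bands in the next paragraph follow the same pattern), the lookup below restates those bands in code. It is not prescribing guidance, and the handling of weights falling exactly on a band boundary is not stated in the text, so treating bands as right-exclusive is an assumption.

```python
def dp_tablets_per_dose(weight_kg: float) -> float:
    """Return the number of paediatric DP tablets per dose for a body weight.

    Mirrors the weight bands stated in the trial description (4-7 kg: 0.5
    tablet, 7-13 kg: 1, 13-24 kg: 2, 24-35 kg: 4). Band boundaries are treated
    as right-exclusive, which is an assumption; this is not prescribing advice.
    """
    bands = [(4, 7, 0.5), (7, 13, 1), (13, 24, 2), (24, 35, 4)]
    for low, high, tablets in bands:
        if low <= weight_kg < high:
            return tablets
    raise ValueError("weight outside the bands used in this trial")

print(dp_tablets_per_dose(17.6))  # mean baseline body weight in the study -> 2 tablets
```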
The other group was assigned to AL (Novartis Pharma, Switzerland). Each tablet contained 20 mg artemether and 120 mg lumefantrine. Patients received treatment according to bodyweight; i.e. children between 5–14 kg received one tablet per dose, those between 15–24 kg two tablets and those between 25–34 kg received three tablets per dose. Doses were given twice daily. All treatments were given with a glass of milk under direct supervision at the clinic or, for the 2nd dose of AL, at home.\n\nBODY.MATERIALS AND METHODS.OUTCOMES: EFFICACY:\nEfficacy was assessed using the WHO in vivo test with a follow-up period of 28 days [22]. At enrollment (day 0) a full clinical examination was performed; information was recorded on a case record form. At initial diagnosis (day 0) and during follow-up (day 1, 2, 3, 7, 14, and 28), finger prick blood samples were collected for microscopy, measurement of haemoglobin level and molecular analysis. Haemoglobin was measured with Hemocue 201+ analyser and cuvettes (HemoCue diagnostics B.V. Waarle, The Netherlands). Response to treatment was measured and defined according to WHO guidelines [22]. Patients showing complications or treatment failure were treated with appropriate supportive therapy. Children developing danger signs or severe malaria on day 1 or 2 of the study were withdrawn from the study, referred to the hospital, and given alternative treatment. Adverse events were recorded on the case record forms. An AE was defined as an unfavourable and unintended symptom, sign or disease. A serious adverse event (SAE) was defined as a symptom or sign that is temporally associated with the drugs administered to the patient that is life-threatening or results in hospitalization, permanent and significant disability or death. SAEs were immediately reported to the ethical committee of KEMRI and the drug safety department of Sigma-Tau, Italy.\n\nBODY.MATERIALS AND METHODS.OUTCOMES: PARASITE CLEARANCE AND GAMETOCYTE DYNAMICS:\nParasite clearance and gametocyte dynamics were assessed microscopically as well as with quantitative nucleic acid sequence based amplification assay, QT-NASBA.\n\nBODY.MATERIALS AND METHODS.MICROSCOPY:\nGiemsa-stained thick and thin smears were prepared according to WHO guidelines. Two independent experienced microscopists, who were blinded to the treatment and clinical status of the patient, examined the smears for the presence of parasites and identified the observed parasite species. Parasitaemia was determined by counting the number of parasites against 200 leukocytes for the asexual stages (assuming that there are 8,000 leukocytes/μl blood). The presence of gametocytes was examined against 500 leukocytes.\n\nBODY.MATERIALS AND METHODS.QT-NASBA:\nFinger prick blood (50 μl) for NASBA analysis was collected on Whatman 903 filter paper (Whatman international Ltd. Maidstone, United Kingdom) and air-dried at room temperature. Nucleic acid extraction was performed as previously described by Boom et al [23]. Real-time 18S rRNA QT-NASBA was applied to study asexual parasite clearance below the microscopical threshold [20]. In order to quantify the number of parasites in blood, a 10-fold serial dilution of 10⁶ to 10 in vitro cultured parasites/ml was used as reference and processed and analysed with NASBA. 
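The reference dilution series mentioned above is the ingredient for a standard curve from which unknown samples can be quantified. The sketch below assumes a log-linear relationship between parasite concentration and the assay readout; the article does not state which readout real-time QT-NASBA produced here (often a time-to-positivity value), so both the readout numbers and the interpolation step are illustrative assumptions.

```python
import numpy as np

# Reference series: 10-fold dilutions from 1e6 down to 10 parasites/ml, with
# hypothetical assay readouts (e.g. time to positivity in minutes) for each.
ref_parasites_per_ml = np.array([1e6, 1e5, 1e4, 1e3, 1e2, 1e1])
ref_readout = np.array([12.1, 15.3, 18.6, 21.9, 25.2, 28.4])   # made-up values

# Fit a log-linear standard curve: readout = slope * log10(concentration) + intercept.
slope, intercept = np.polyfit(np.log10(ref_parasites_per_ml), ref_readout, 1)

def parasites_per_ml(readout: float) -> float:
    """Interpolate an unknown sample's readout back to a concentration."""
    return 10 ** ((readout - intercept) / slope)

print(round(parasites_per_ml(20.0)))  # roughly 3,800 parasites/ml with these toy numbers
```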
Furthermore, to assess prevalence of gametocytes below the detection limit of microscopy, QT-NASBA targeting Pfs25 mRNA as described by Schneider et al [20] was used on blot spots collected during follow-up.\n\nBODY.MATERIALS AND METHODS.GENOTYPING:\nIn order to discriminate between re-infection (RI) and recrudescence (RE), merozoite surface protein 1 and 2 (msp1 and msp2) and glutamate rich protein (GLURP) genotyping was performed as described by Snounou [24] on blood spots obtained at primary (day 0) and secondary infection (time point of re-occurrence). Blood spots were collected on Whatman 903 filter paper (Whatman international Ltd. Maidston, United Kingdom) and air-dried at room temperature for PCR analysis. DNA was isolated as described by Boom et al [23]. Molecular analysis was performed at Royal Tropical Institute, Amsterdam and was done blinded from the treatment that was given to the patients.\n\nBODY.MATERIALS AND METHODS.SAMPLE SIZE AND STATISTICAL ANALYSIS:\nThe aim of the study was to compare gametocytaemia after AL and DP and to compare assessment of gametocytaemia by microscopical examination versus QT-NASBA All data were entered in excel and analysed with SPSS for windows (version 12.0). Parasite densities were analysed after natural log-transformation. Where appropriate, proportions were compared with the χ2-test and means were compared with the one-way ANOVA or Student t-test. A simplified trapezoid area under the curve (AUC) analysis using gametocyte data from days 0, 3, 7, 14 and 28, as a surrogate for the infectiousness of the participants in the different treatment groups, was performed.\n\nBODY.RESULTS.PATIENT RECRUITMENT:\nIn total 1882 cases suspected of uncomplicated malaria were screened for eligibility into the study during an 8-week recruitment period in April and May 2007. 1,736 children were excluded because they did not meet the inclusion criteria (Figure 1). 146 patients fulfilling the inclusion criteria entered the study; 73 were randomly allocated to the DP arm and 73 to the AL arm. Both study groups were comparable at baseline for their demographical and clinical characteristics and parasite densities (Table 1). On completion of follow up (day 28) data of 134 patients (92%) were available for analysis. Table 1 Baseline characteristics of patients included in the study at the time of enrollment in the study Characteristic DHA-PQP (n = 73) ALN (n = 73) Sex ratio male:female 33:40 40:33 Age (in months), median (IQR) 60 (44) 52 (44) Body weight, mean kg (range) 17.62 (6–37) 17.32 (6–42) Temperature, mean °C ± SD 38.1 ± 0.99 37.8 ± 0.73 Haemoglobin mmol/L ± SD 6.33 ± 1.29 6.28 ± 1.27 Parasites/μl geometric mean (range) as determined by microscopy 12145 (1000–72640) 13379 (1080–72000) Figure 1Schematic representation and flowchart of the study. Twelve patients did not reach the study endpoint. Seven patients were lost during follow up, one was unable to take oral medication, one developed severe anaemia, one did not receive the proper drugs, one withdrew from the study and one patient died.\n\nBODY.RESULTS.TREATMENT OUTCOME:\nThere were no early treatment failures during the first three days of follow up. Only one patient in the AL arm had a recurrent parasitaemia (43,880 parasites/μl) at day 28 of follow up. Genotyping analysis revealed that this patient had a reinfection with P. falciparum. All other 133 patients who completed follow-up had an adequate clinical and parasitological response. 
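Stepping back to the simplified trapezoid AUC over days 0, 3, 7, 14 and 28 described in the statistical analysis section above, the snippet below shows one way such an area can be computed. The article does not state exactly which quantity was integrated to obtain the reported "infectious persons/day" values, so integrating the proportion of gametocyte-positive participants is only an interpretation, and the input values are placeholders rather than the trial's data.

```python
import numpy as np

def trapezoid_auc(days, values):
    """Trapezoid area under a set of follow-up measurements."""
    days = np.asarray(days, dtype=float)
    values = np.asarray(values, dtype=float)
    return float(np.sum((values[1:] + values[:-1]) / 2.0 * np.diff(days)))

# Scheduled gametocyte assessments used for the AUC (study days).
days = [0, 3, 7, 14, 28]

# Placeholder proportions of gametocyte-positive participants per visit
# (invented values, one series per treatment arm).
arm_a = [0.40, 0.55, 0.50, 0.30, 0.15]
arm_b = [0.40, 0.35, 0.20, 0.20, 0.10]

# A larger area means more person-time spent gametocyte positive, used here
# as a crude surrogate for infectiousness over the 28-day follow-up.
print(f"AUC day 0-28: arm A = {trapezoid_auc(days, arm_a):.1f}, "
      f"arm B = {trapezoid_auc(days, arm_b):.1f} (proportion-days)")
```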
After one day of treatment, over 90% of the patients had no microscopically detectable asexual parasites. In the AL group no parasites could be detected with microscopy in any of the patients at day two. One patient was still microscopically positive at day two in the DP group with 40 parasites/μl, but this patient was also microscopically negative at day 3. The parasite reduction ratio over one 48-hour reproduction cycle (parasite count on admission/parasite count at 48 hours) was 8.96 × 10⁵ for the AL treatment and 2.06 × 10⁴ for the DP treatment. NASBA was also applied to monitor parasite dynamics at the sub-microscopical level. Humidity in some of the filter papers degraded the RNA in the blood spots of some of the samples. This led to several extraction failures. In order to have a clear picture of parasite dynamics, only those series with a full range of follow-up samples, i.e. 56 DP and 54 AL treated patients, were analysed. Both treatment arms showed a steep decline in parasitaemia from the day of enrollment (day 0) to day 1: a 62% reduction after DP treatment and an 89% reduction in the AL arm. At day 2, the level of parasitaemia was reduced to 1.2% in the AL group and 2.75% in the DP group.\n\nBODY.RESULTS.HB CONVALESCENCE, FEVER CLEARANCE AND ADVERSE EVENTS:\nAt baseline, Hb levels in both treatment groups were comparable (Table 1). At day 28, all groups had a significant increase in Hb; however, no significant difference between the treatments in Hb convalescence was found. Final mean Hb levels were 7.15 mmol/l ± 1.07 for the DP treatment group and 6.79 mmol/l ± 1.24 for the AL group. A possible influence of anaemia on gametocyte carriage at enrollment was not observed in the present study (p > 0.05). Fever clearance was defined as the time from receiving the assigned treatment to the time a normal body temperature was recorded (≤ 37.5°C) in study cases who presented with fever. Fever clearance was rapid in both study groups. On day 1, 10 cases (13.7%) in the DP group presented with fever and six cases (8.2%) were observed in the AL group. In the DP group fever was observed on day 2 in four cases (5.5%) and three cases (4.1%) on day 3. In the AL group cases with fever also presented on day 2 (12 study subjects, 16.4%) and on day 3 (two individuals, 2.7%). Fever was not observed during follow-up after day 7, with the exception of the child that presented with a P. falciparum reinfection on day 28 in the AL group and the child that developed broncho-pneumonia (case presented below). Furthermore, the presence of fever at recruitment was not a predictor of gametocyte carriage (p > 0.05). Most adverse events were mild, self-limiting and consistent with symptoms of malaria. There was no significant difference between the two study groups (Table 2). One patient died. The child (63 months) had been ill for two weeks prior to presentation at the clinic. Plasmodium falciparum infection with parasitaemia of 20,120 parasites/μl was diagnosed. There was a fever (38.6°C), but there were no other complaints and no signs of severe anaemia (Hb: 6.4 mmol/L). On day 3, there were no signs of illness. On day 7, the child presented with fever (38.6°C), cough and complaints of anorexia. There was no history of significant illness or allergies. After examination (microscopy was negative for malaria parasites), broncho-pneumonia was diagnosed and the child was treated with oral phenoxymethylpenicillin for five days and Paracetamol syrup. 
On day 14, the child did not attend the follow-up visit, the parents reported that the child died a day before in a local health post. The event was assessed as unrelated to the study drug. Autopsy was not performed. Table 2 Summary of adverse events recorded during the study Adverse event DHA-PQP ALN P- value Headache 43 (58.9%) 37 (50.7%) 0.318 Abdominal pain 25 (32.4%) 26 (35.6%) 0.862 Weakness 19 (26.0%) 30 (41.1%) 0.035 Anorexia 8 (10.9%) 10 (13.7%) 0.439 Diarrhea 9 (12.3%) 7 (9.6%) 0.785 Cough 16 (21.9%) 17 (23.3%) 0.843 Vomiting 11 (15.1%) 9 (12.3%) 0.806 Pruritis 4 (5.5%) 3 (4.1%) 0.698 \n\nBODY.RESULTS.GAMETOCYTE DYNAMICS:\nThe presence of gametocytes in clinical samples was assessed by microscopy and NASBA and is presented in Table 3. At the start of the study, three patients in the DP arm (4.5%) and six patients in the AL arm (9.0%) carried microscopically detectable gametocytes. Microscopical follow-up of the presence of gametocytes during the whole study period revealed that in total 39 samples (distributed over 13 patients) in the DP arm and 18 samples (distributed over nine patients) in the AL arm carried gametocytes (not significantly different). It was observed that the microscopical detection of gametocytes in blood slides during the study was subjected to fluctuations (for example a case positive on day 0, negative on day 1 and subsequently again positive at day 3), which is probably due to the fact that gametocytes circulate at low levels. However, on day 7, three patients in the DP group and 1 in the AL group showed gametocyte positive slides for the first time. On day 28, in none of the cases gametocytes were observed by microscopy. There was no difference between children older than 60 months and younger as regards carriage of gametocytes and density. Table 3 Occurrence of gametocytes as detected by microscopy or NASBA in the different study groups at the start of the study and during subsequent follow-up. Gametocyte positive samples Day 0 Day 1 Day 2 Day 3 Day 7 Day 14 Day 28 DP group microscopy (n = 67) Total number of positive cases 3 7 7 10 7 5 0 Number of new cases observed 5 2 0 3 0 0 DP group Nasba (n = 56) Total number of positive cases 22 34 a 33 a 17 8 Number of new cases observed 13 8 5 0 AL group microscopy (n = 67) Total number of positive cases 6 3 3 3 2 1 0 Number of new cases observed 1 1 0 1 0 0 AL group Nasba (n = 54) Total number of positive cases 21 20 11 12 5 Number of new cases observed 6 4 6 0 a Number of gametocyte carriers detected with NASBA is significantly higher in the DP treated group compared to the AL treated group. NASBA analysis on 56 DP treated subjects and 54 AL treated subjects detected strikingly more gametocyte carriers at the start of the study compared to microscopy; i.e. 22 study subjects in the DP arm (39.3%) and 21 in the AL (38.9%) were harbouring gametocytes before. The pfs25 NASBA revealed that 34 cases (60.7%) were gametocyte positive in the DP group on day 3, of which 13 cases were newly identified compared to day 0. In contrast, a significantly lower number of patients (20; 37%) were gametocyte positive in the AL treatment group on day 3, of which six new cases. This trend was also observed on day 7: the DP arm had 33 (58.9%) gametocyte positive samples, whereas the AL treated group had a significantly lower number of gametocyte positive samples (11, 20.3%). 
However, on day 14 (AL: 12 positive [22.2%], six new; DP: 17 [30.4%], five new) and day 28 (AL: 5 positive [9.3%], no new cases; DP: 8 [14.3%], no new cases) of follow-up no significant difference in gametocyte carriage was observed between the two treatment groups. The AUC (day 0–28) of the two treatment groups was calculated to be 20.0 infectious persons/day for the DP treatment arm and 10.5 infectious persons/day for the AL treatment arm. Stratifying the data for age under and above 60 months showed no difference in either of the groups.\n\nBODY.DISCUSSION:\nSeveral studies have analysed the efficacy and tolerability of AL and DP and all show very good results [1,4,6,7,12,14,25]. In the present study, the two drugs were shown to be similar with respect to effectiveness and tolerability when compared with other studies. However, most of these studies have a follow-up of 42 days, which makes a direct comparison of the results difficult. No adverse events other than those related to malaria itself were observed in the current study, which is in line with other reports. In the present study, all children experienced haemoglobin convalescence without difference between the two treatment arms, in contrast to the study of Kamya et al., who found a greater increase in the DP-treated patients [4]. The difference in follow-up time and the numbers of patients included in both studies may be responsible for this difference. Further studies with comparable study length should be done to resolve these discrepancies. The effects on gametocytaemia and possibly malaria transmission deserve further study. Whereas asexual parasites were cleared in three days after the initiation of the two treatment schedules, gametocytaemia appeared different when assessed by microscopy as well as with NASBA. Gametocytes were present in low numbers throughout follow-up in both study groups. Artemisinin derivatives have in general a negative effect on gametocyte development and survival and thus influence malaria transmission, at least in low transmission areas [15-17]. In this study, the actual infectiousness of the remaining gametocyte populations in both treatment arms was not assessed; the presence of gametocytes does not necessarily mean that they actually contribute to transmission. Several studies have shown that gametocytes persist in a large population of previously infected and treated children [5,19,26,27]. A large proportion of these carriers has a parasite load below the microscopical detection limit, a load that can be detected with molecular assays like NASBA. Patients with submicroscopic parasite densities may still be infectious to mosquitoes and may contribute to transmission [14,19], as confirmed with membrane feeding experiments [18,28]. Studies that include reduction of transmission as a component of drug efficacy thus need to incorporate highly sensitive molecular assays to reliably assess gametocyte densities. The present study showed a limited effect of DP on gametocyte development in comparison with AL when a sensitive tool like NASBA is used for gametocyte detection, which could limit the usefulness of DP to areas with low transmission, but this finding should be further investigated in larger studies in different study sites with different transmission intensities. It is not clear if plasma concentrations of dihydroartemisinin in the blood could play a role. Dihydroartemisinin is the major and the active metabolite of artemether. 
So far, no studies have been performed that compare the plasma levels of dihydroartemisinin when given as such or after administration of artemether and subsequent metabolisation. This should be further investigated together with the effect on gametocytogenesis, which should incorporate a sensitive detection tool for gametocytes such as NASBA. The effect that drugs can have on gametocyte clearance as measured with NASBA could have some implications for the introduction of drugs, and especially the introduction of new drugs. This study showed that with sensitive detection tools a difference in parasite clearance can be observed, but these results should be confirmed in larger studies and in other study areas with different malaria transmission intensities. Transmission intensity varies significantly between African countries, and within a country high- and low-transmission areas can often be identified. Malaria endemic countries generally have a national malaria drug policy for the whole country. Although this is logical from a practical and logistical point of view, it may not be the best approach for effective malaria control. It could, therefore, be more effective if a country develops specific drug policies to suit regional instead of national requirements.\n\nBODY.COMPETING INTERESTS:\nThe authors declare that they have no conflict of interest. The organizations that provided financial support had no influence on the design, the actual field and laboratory work, analysis and writing.\n\nBODY.AUTHORS' CONTRIBUTIONS:\nPM was involved in the conception of the study, carried out the molecular analysis of the samples, carried out the statistical analysis of the results and drafted the manuscript. PS coordinated the study in the field and was responsible for the clinical examination and treatment of the participants. SvA collected clinical data and blood samples in the field and did the clinical data analysis. SO coordinated the study in the field and arranged the logistics. IV carried out the NASBA assays and the subsequent analysis. HS was involved in the design of the study and contributed to the drafting of the manuscript. PK was involved in the design of the study and critically read and improved the manuscript. All authors read and approved the final manuscript.\n\n**Question:** Compared to dihydroartemisinin-piperaquine (DP) what was the result of artemether-lumefantrine (AL) on Adverse events?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** no significant difference\n"
}
] |
no significant difference
|
377
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Efficacy and Tolerability of Budesonide/Formoterol in One Hydrofluoroalkane Pressurized Metered-Dose Inhaler in Patients with Chronic Obstructive Pulmonary Disease\n\n ABSTRACT:\nBackground: Combination therapy with a long-acting bronchodilator and an inhaled corticosteroid (ICS) is recommended in patients with chronic obstructive pulmonary disease (COPD) who have frequent exacerbations. The efficacy and tolerability of the combination of budesonide/formoterol have been demonstrated in patients with COPD when administered via the dry powder inhaler (DPI) in a 1-year study and when administered via the hydrofluoroalkane (HFA) pressurized metered-dose inhaler (pMDI) in a 6-month study. Objective: This study assessed the long-term efficacy and tolerability of budesonide/formoterol HFA pMDI in patients with moderate to very severe COPD. Methods: This was a 12-month, randomized, double-blind, double-dummy, parallel-group, active- and placebo-controlled, multicentre study (NCT00206167) of 1964 patients aged ≥40 years with moderate to very severe COPD conducted from 2005 to 2007 at 237 sites in the US, Europe and Mexico. After 2 weeks of treatment based on previous therapy (ICSs, short-acting bronchodilators allowed), patients received one of the following treatments twice daily: budesonide/formoterol pMDI 160/4.5 μg × two inhalations (320/9 μg); budesonide/formoterol pMDI 80/4.5 μg × two inhalations (160/9 μg); formoterol DPI 4.5 μg × two inhalations (9 μg); or placebo. Main outcome measures: The co-primary efficacy variables were pre-dose forced expiratory volume in 1 second (FEV1) and 1-hour post-dose FEV1. Results: Budesonide/formoterol 320/9 μg demonstrated greater improvements in pre-dose FEV1 versus formoterol (p = 0.008), and both budesonide/formoterol doses demonstrated greater improvements in 1-hour post-dose FEV1 versus placebo (p < 0.001). The rate of COPD exacerbations was lower in both budesonide/formoterol groups compared with formoterol and placebo (p ≤ 0.004). Both budesonide/formoterol doses were more effective than placebo (p ≤ 0.006) for controlling dyspnoea and improving health status (St George's Respiratory Questionnaire). All treatments were generally well tolerated. The incidence of pneumonia was not different for active (3.4–4.0%) and placebo (5.0%) groups. Conclusions: Budesonide/formoterol pMDI (320/9 μg and 160/9 μg) improved pulmonary function and reduced symptoms and exacerbations over 1 year in patients with moderate to very severe COPD. Only budesonide/formoterol pMDI 320/9 μg demonstrated greater efficacy for both co-primary variables compared with formoterol DPI 9 μg. 
Both budesonide/formoterol pMDI dosages were well tolerated relative to formoterol and placebo.\n\nABSTRACT.ELECTRONIC SUPPLEMENTARY MATERIAL:\nSupplementary material is available for this article at 10.2165/00003495-200969050-00004 and is accessible for authorized users.\n\nBODY.BACKGROUND:\nIn patients with chronic obstructive pulmonary disease (COPD) and frequent exacerbations, the combination of a long-acting bronchodilator and an inhaled corticosteroid (ICS) is recommended.[1] Budesonide and formoterol delivered via a single dry powder inhaler (DPI; Symbicort® Turbuhaler®; AstraZeneca, Lund, Sweden) have been shown to improve pulmonary function[2,3] and reduce COPD exacerbation rates.[2,3] Fixed-combination budesonide/formoterol is available in the US in a hydrofluoroalkane pressurized metered-dose inhaler (pMDI; Symbicort® Inhalation Aerosol; AstraZeneca LP, Wilmington, DE, USA). While most patients with COPD can benefit from either a pMDI or DPI, the pMDI formulation allows patients who may have inadequate peak inspiratory flow for optimal delivery from a DPI to use the pMDI effectively. A recent 6-month study demonstrated the efficacy and tolerability of budesonide/formoterol pMDI in patients with moderate to very severe COPD.[4] Treatment with budesonide/formoterol pMDI resulted in significant improvements in pulmonary function, dyspnoea and health-related quality of life; however, the duration of the study was limiting with respect to the analysis of low-frequency events, such as COPD exacerbations, and precluded an assessment of safety outcomes with long-term exposure.[4] In this study, the long-term efficacy and tolerability of two dosage strengths of budesonide/formoterol pMDI were assessed over 12 months in patients with moderate to very severe COPD.\n\nBODY.PATIENTS AND METHODS.PATIENTS:\nThe inclusion criteria were designed to select a population with moderate to very severe COPD with previous exacerbations (i.e. appropriate candidates for combination ICS/long-acting β2-adrenoceptor agonist [LABA] therapy): age ≥40 years, diagnosis of symptomatic COPD for >2 years, ≥10 pack-year smoking history, prebronchodilator forced expiratory volume in 1 second (FEV1) of ≤50% of predicted normal and prebronchodilator FEV1/forced vital capacity (FVC) of <70%. Patients were to have a Modified Medical Research Council dyspnoea scale score of ≥2 and a history of at least one COPD exacerbation requiring oral corticosteroids or antibacterials within 1–12 months before the first study visit. Additional enrolment criteria were the same as those in a similar 6-month study by Tashkin et al.[4]\n\nBODY.PATIENTS AND METHODS.STUDY DESIGN:\nThis was a randomized, double-blind, double-dummy, parallel-group, active- and placebo-controlled, multicentre trial (ClinicalTrials.gov identifier: NCT00206167) conducted from 2005 to 2007 at 237 sites in the US, Europe and Mexico. The study protocol was approved by a local institutional review board and ethics committee, and written informed consent was obtained from patients. The study was designed to conform with the Declaration of Helsinki, and was consistent with the International Conference on Harmonisation and Good Clinical Practice and applicable regulatory requirements. After meeting eligibility criteria, patients entered a 2-week run-in period, during which they received ICS monotherapy if previously stable on ICS (alone or in combination) and ipratropium bromide at a fixed dose if previously receiving anticholinergics. 
Albuterol (salbutamol) was permitted for rescue use throughout the study. At randomization, all previous ICSs were discontinued and patients were randomized to one of four treatments (figure 1). Patients who were treated with ipratropium bromide during the run-in period were allowed to continue on it throughout the study period. Other allowed and disallowed concomitant medications were the same as those reported in the study by Tashkin et al.[4]\nFig. 1Study design. To maintain blinding, patients received both a pressurized metered-dose inhaler (pMDI) and a dry powder inhaler (DPI) containing either active treatment or double-dummy placebo (PL) as appropriate. This study used formoterol (FM) DPI (Oxis® Turbuhaler®, AstraZeneca, Lund, Sweden) as the FM comparator because FM is not available as a hydrofluoroalkane pMDI in the US. A previous study in asthma patients reported equivalent FM-related bronchodilatory effects when FM was administered in combination with budesonide (BUD) via pMDI or alone via DPI.[5] Patients were asked to return to the clinic for follow-up visits 3–8 at the end of months 1, 2, 4, 6, 9 and 12, and received a telephone call 4 weeks after the last clinic visit. bid = twice daily; R = randomization.\n\nBODY.PATIENTS AND METHODS.EFFICACY EVALUATIONS:\nThe co-primary efficacy variables, measured at all clinic visits, were pre-dose FEV1 and 1-hour post-dose FEV1. Additional pulmonary function variables included pre-dose and 1-hour post-dose FVC measured at all clinic visits, and morning and evening peak expiratory flow (PEF) recorded daily in patient diaries. In a subset of patients (n = 491 of 1964; 25%) who agreed to undergo spirometry testing, 12-hour serial spirometry was performed and pre-dose and 1-hour post-dose inspiratory capacity (IC) were collected at randomization and months 6 and 12. FEV1 was measured pre-dose and at 5, 15, 30, 60, 120, 180, 240, 360, 480, 600 and 720 minutes post-dose. Baseline-adjusted average 12-hour FEV1 was calculated as the area between the 12-hour post-dose FEV1-over-time curve and the baseline pre-dose FEV1, divided by observation time. Spirometry was performed according to American Thoracic Society guidelines.[6] Crapo-predicted normals for FEV1 were used.[7] Secondary efficacy endpoints included COPD exacerbations, dyspnoea (separate and composite scores) and health status, all assessed as previously described.[4] A COPD exacerbation was defined as worsening of COPD requiring an oral corticosteroid or hospitalization.[4] Dyspnoea was assessed daily before the evening dose of study medication using the Breathlessness Diary, a validated single-item component of the Breathlessness Cough and Sputum Scale (BCSS).[8] Health status was assessed at months 1, 2, 6 and 12 using the St George's Respiratory Questionnaire (SGRQ).[9,10] Sleep score, percentage of awakening-free nights (sleep score of 0) and study rescue medication use were assessed as previously described by Tashkin et al.[4]\n\nBODY.PATIENTS AND METHODS.SAFETY EVALUATIONS:\nSafety was assessed by adverse event (AE) reporting. Pneumonia events were reported by physicians based on the Medical Dictionary for Regulatory Activities (version 10.0) pneumonia-related preferred terms (pneumonia, bronchopneumonia, lobar pneumonia or pneumonia staphylococcal). 
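Stepping back briefly to the efficacy evaluations above, the baseline-adjusted average 12-hour FEV1 (the area between the post-dose FEV1-over-time curve and the pre-dose baseline, divided by the observation time) is compact enough to show as code. The sketch below is illustrative only: trapezoidal integration and the toy spirometry values are assumptions, not details taken from the trial.

```python
import numpy as np

def average_12h_fev1(times_min, fev1_l, baseline_fev1_l):
    """Baseline-adjusted average 12-hour FEV1 (litres above baseline).

    Area between the serial FEV1 curve and the pre-dose baseline, divided by
    the observation time; trapezoidal integration is an assumption here.
    """
    times_h = np.asarray(times_min, dtype=float) / 60.0
    delta = np.asarray(fev1_l, dtype=float) - baseline_fev1_l
    area = np.sum((delta[1:] + delta[:-1]) / 2.0 * np.diff(times_h))
    return area / (times_h[-1] - times_h[0])

# Toy serial-spirometry profile at the scheduled time points (minutes, pre-dose = 0):
t = [0, 5, 15, 30, 60, 120, 180, 240, 360, 480, 600, 720]
fev1 = [1.10, 1.18, 1.24, 1.28, 1.32, 1.33, 1.30, 1.28, 1.24, 1.20, 1.17, 1.14]
print(round(average_12h_fev1(t, fev1, baseline_fev1_l=1.10), 3))
```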
Vital signs and 12-lead ECGs were evaluated as previously described by Tashkin et al.[4] Subsets of patients were assessed for 24-hour urinary cortisol (n = 179), 24-hour Holter monitoring (n = 520), bone mineral density (BMD) at the hip and spine regions (n = 326), and ophthalmological assessments (n = 461), including intraocular pressure and progression of lenticular opacities. In all patients, blood and urine samples were collected, and comprehensive physical examinations were carried out at the time of screening and at the end of month 12. Blood and urine samples were analysed by a central laboratory (Quest Diagnostics Clinical Laboratories, Van Nuys, CA, USA). Vital signs, including heart rate and blood pressure, were measured at all study visits. Samples to assess 24-hour urinary cortisol levels were collected in a subset of patients at or before randomization and within 1 week before the 6- and 12-month visits, and analysed by Quest Diagnostics. 24-Hour Holter monitoring was performed at screening and months 1 and 4, BMD assessments (two sets of dual energy x-ray absorptiometry scans in the hip and spine regions) were performed at screening and the end of month 12, and ophthalmology assessments (intraocular pressure and lenticular opacities) were performed at screening and the end of months 6 and 12.\n\nBODY.PATIENTS AND METHODS.STATISTICAL ANALYSES:\nThe efficacy analysis set (i.e. intent-to-treat population) included all randomized patients who received at least one dose of randomized study medication and contributed sufficient data for at least one co-primary or secondary efficacy endpoint to be calculated during the randomized treatment period. The safety analysis population included all randomized patients who received at least one dose of randomized study medication and from whom any post-randomization data were available. For the subsets of patients who underwent serial spirometry, 24-hour urinary cortisol, BMD, 24-hour Holter monitoring and ophthalmological analyses, analysis sets were defined as those patients who received at least one dose of randomized study medication and had baseline and on-treatment values for the variable being assessed. Patients who discontinued prematurely completed final visit (month 12) assessments at the time of discontinuation, followed by the 4-week follow-up. A sample size of approximately 400 patients per treatment group was calculated to allow 90% power to detect a reduction in the number of COPD exacerbations of approximately 30%, adjusting for overdispersion of 2.3. This sample size ensured >95% power to detect a difference of 0.10L in FEV1, given an estimated standard deviation of 0.3 L. All tests were two-sided, with p ≤ 0.05 considered significant. The prespecified primary comparators were formoterol DPI for pre-dose FEV1 to demonstrate the contribution of budesonide and placebo for 1-hour post-dose FEV1. To address issues of multiplicity relating to multiple-dose comparisons and multiple variables of interest, a fixed-sequence testing procedure was adopted. The primary comparisons were prespecified as budesonide/formoterol pMDI 320/9 μg compared with (i) placebo for pre-dose FEV1 and 1-hour post-dose FEV1; and (ii) formoterol DPI 9 μg for pre-dose FEV1. If significant differences were obtained, comparisons continued with budesonide/formoterol pMDI 320/9 μg compared with placebo for (i) dyspnoea; (ii) SGRQ total score; and (iii) number of exacerbations. 
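The sample-size paragraph above states that roughly 400 patients per group ensured >95% power to detect a 0.10 L difference in FEV1, assuming a standard deviation of 0.3 L and two-sided alpha of 0.05. A rough sanity check of that figure using a standard two-sample normal approximation might look like the sketch below; this is not the study's own calculation, which was driven by exacerbation counts and an overdispersion adjustment.

```python
from scipy.stats import norm

# Rough power check with a two-sample normal approximation.
# Inputs are taken from the text: ~400 patients/group, detectable difference 0.10 L,
# SD 0.3 L, two-sided alpha 0.05.
n_per_group = 400
delta = 0.10   # L
sd = 0.30      # L
alpha = 0.05

se_diff = sd * (2.0 / n_per_group) ** 0.5      # SE of the between-group mean difference
z_crit = norm.ppf(1.0 - alpha / 2.0)           # ≈ 1.96
power = norm.cdf(delta / se_diff - z_crit)
print(f"approximate power: {power:.3f}")
```

With these inputs the approximation returns a power of about 0.997, consistent with the ">95%" statement in the text.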
If significant differences were obtained for each of these key secondary variables, the lower dose of budesonide/formoterol pMDI was tested on the co-primary variables, as previously described, and, if significant differences were obtained, testing continued with number of exacerbations, dyspnoea and SGRQ total score compared with placebo. For all secondary efficacy variables, the primary comparison was budesonide/formoterol pMDI versus placebo. Primary and secondary variables were assessed as the change from baseline to the average over the randomized treatment period except for average 12-hour FEV1, mean FEV1 at 12 hours and SGRQ at end of treatment. Changes from baseline in the co-primary efficacy variables were analysed via analysis of co-variance (ANCOVA), adjusting for treatment, country and baseline value. The following three key secondary efficacy variables were identified: (i) dyspnoea; (ii) SGRQ total score; and (iii) exacerbations. Secondary variables were analysed similar to the co-primary efficacy variables; however, exacerbation rate (number/patient-treatment year) was analysed using a Poisson regression model, adjusted for country and differential randomization time, and time to first COPD exacerbation was described via Kaplan-Meier plot and analysed using the log-rank test. The number and percentage of patients experiencing clinically meaningful changes (based on a prespecified minimal important difference) in dyspnoea were analysed using the Cochran-Mantel-Haenszel test, adjusting for country. Geometric mean 24-hour urinary cortisol levels at end of treatment were compared between treatment groups using a multiplicative ANCOVA model. Mean changes from baseline to the average during the randomized treatment period in 12-lead ECG, and Holter variables and ophthalmology variables were analysed using a model similar to that used for the co-primary variables. BMD variables (natural logarithm of the analysis timepoint minus the natural logarithm of the respective baseline value for hip and spine) were analysed using an ANCOVA model adjusting for country, treatment and natural logarithm of the baseline value. Other safety data were summarized using descriptive statistics. No formal hypothesis testing of the safety data was performed.\n\nBODY.RESULTS.PATIENTS:\nOf 1964 randomized patients, 1355 completed the study (figure 2). Discontinuation was greater with placebo versus budesonide/formoterol and formoterol driven by withdrawal of consent. Time to discontinuation was significantly (p ≤ 0.004) prolonged in both budesonide/formoterol groups versus placebo. AE was the most common reason for study discontinuation, with a similar incidence across treatments (12.1–13.5%). Approximately 60% of patients had documented use of an ICS, either alone or in combination before entering the study. The percentage of patients who discontinued from the study in the formoterol and placebo groups was slightly higher in patients previously treated with ICS (34.9% and 38.0%, respectively) compared with those not previously receiving ICS before the study (26.8% and 33.9%, respectively). In contrast, patients in the budesonide/formoterol 320/9-μg group previously receiving ICS had a lower discontinuation rate compared with those not previously receiving ICS (23.1% vs 34.1%, respectively). In the budesonide/formoterol 160/9-μg group, the percentage of patients who discontinued the study was similar for those who were previously receiving an ICS (29.5%) and those who were not (28.2%).\nFig. 
2Patient disposition. AE = adverse event; bid = twice daily; BUD = budesonide; DPI = dry powder inhaler; FM = formoterol; med = medication; PL = placebo; pMDI = pressurized metered-dose inhaler. Most demographic and disease characteristics at baseline were similar across treatments (table I). However, in the serial spirometry subset (n = 491), mean percentage reversibility was greater in the budesonide/formoterol 320/9 μg (19.9%), budesonide/formoterol 160/9 μg (20.7%) and placebo (19.5%) groups versus formoterol (16.9%). COPD severity[1] was moderate in 17.8%, severe in 60.4% and very severe in 21.5% of patients. Common co-morbid conditions included hypertension (41.6%), lipid profile abnormalities (22.0%), cardiac disease (17.7%), diabetes mellitus (11.0%), osteoporosis (10.5%), cataracts (5.2%), atrial fibrillation/arrhythmia (4.3%) and congestive cardiac failure (2.9%).\nTable IPatient demographic and baseline clinical characteristics of randomized patients\n\nBODY.RESULTS.EFFICACY EVALUATIONS.PULMONARY FUNCTION.CO-PRIMARY ASSESSMENTS:\nImprovements in pre-dose FEV1 were significantly greater for budesonide/formoterol 320/9 μg compared with formoterol (primary comparison; p = 0.008) and for both budesonide/formoterol dosages compared with placebo (p < 0.001) [figure 3a]. Although improvements in 1-hour post-dose FEV1 were significantly greater for both budesonide/formoterol dosages compared with placebo (primary comparison; p < 0.001), budesonide/formoterol 320/9 μg also resulted in significantly greater improvements compared with formoterol (p = 0.023) [figure 3b]. Improvements from baseline were apparent at the first assessment (pre-dose FEV1 at end of month 1; 1-hour post-dose FEV1 at day of randomization) and overall maintained over the 12-month treatment period for both budesonide/formoterol dosages (figure 3a and b).\nFig. 3Co-primary efficacy endpoints. Least squares mean change from baseline by study visit over the randomized treatment period in (a) pre-dose forced expiratory volume in 1 second (FEV1) and (b) 1-hour post-dose FEV1. BUD = budesonide; DPI = dry powder inhaler; FM = formoterol; pMDI = pressurized metered-dose inhaler. * p < 0.001 vs placebo; † p ≤ 0.023 vs FM.\n\nBODY.RESULTS.EFFICACY EVALUATIONS.PULMONARY FUNCTION.SECONDARY ASSESSMENTS:\nMorning and evening PEF improved significantly for all active treatments compared with placebo (p ≤ 0.012) and for both budesonide/formoterol dosages compared with formoterol (p ≤ 0.017) [table II]. Results for pre-dose and 1-hour post-dose FVC are presented in the online supplement (see figures S1a and b in the supplementary material ['ArticlePlus'] at http://links.adisonline.com/DGZ/A5).\nTable IIMean (SD) changes from baseline in additional pulmonary function assessments In the serial spirometry subset, a ≥15% improvement in FEV1 was observed at 5 minutes after dose administration (first assessment) with all active treatments on the day of randomization (budesonide/formoterol 320/9 μg, 17.7%; budesonide/formoterol 160/9 μg, 20.3%; formoterol 9 μg, 16.5%) and at end of treatment (budesonide/formoterol 320/9 μg, 21.8%; budesonide/formoterol 160/9 μg, 22.4%; formoterol 9 μg, 15.0%) [figures 4a, b and c]. There was no evidence of a diminished effect at end of treatment over the 12-hour period in the budesonide/formoterol groups. 
Mean FEV1 at 12 hours and baseline-adjusted average 12-hour FEV1 were significantly improved with both budesonide/formoterol dosages compared with placebo on the day of randomization and at end of treatment (p ≤ 0.002) [table II]. Although both budesonide/formoterol dosages demonstrated significant improvements in mean FEV1 at 12 hours and in baseline-adjusted average 12-hour FEV1 compared with formoterol on the day of randomization (p ≤ 0.029), only budesonide/formoterol 320/9 μg demonstrated this effect at the end of treatment (p ≤ 0.004) [table II].\nFig. 4Mean percentage change from baseline in forced expiratory volume in 1 second (FEV1) over 12 hours at randomization and end of treatment (EOT) for (a) budesonide (BUD)/formoterol (FM) 320/9 μg twice daily (bid) vs placebo, (b) BUD/FM 160/9 μg bid vs placebo and (c) FM 9 μg bid vs placebo. DOR = day of randomization; DPI = dry powder inhaler; pMDI = pressurized metered-dose inhaler. No significant differences in pre-dose IC were observed among the treatment groups (figure 5a). Improvements from baseline to the average over the randomized treatment period in 1-hour post-dose IC were significantly greater with both budesonide/formoterol dosages compared with placebo (p < 0.001) and formoterol (p ≤ 0.018) [figure 5b]. Both dosages of budesonide/formoterol resulted in a 350 mL (≈25%) improvement from baseline to the average over the randomized treatment period in 1-hour post-dose IC, and formoterol resulted in a 210 mL (17%) improvement in 1-hour post-dose IC.\nFig. 5Least squares mean change from baseline by study visit over the randomized treatment period in (a) pre-dose inspiratory capacity (IC) and (b) 1-hour post-dose IC. BUD = budesonide; DPI = dry powder inhaler; FM = formoterol; pMDI = pressurized metered-dose inhaler. * p < 0.001 vs placebo; † p = 0.01 vs FM; ‡ p < 0.05 vs FM; § p < 0.01 vs placebo.\n\nBODY.RESULTS.EFFICACY EVALUATIONS.CHRONIC OBSTRUCTIVE PULMONARY DISEASE (COPD) EXACERBATIONS:\nTime to first COPD exacerbation was significantly prolonged with both budesonide/formoterol dosages compared with placebo (p ≤ 0.004) and with budesonide/formoterol 320/9 μg compared with formoterol (p = 0.026) [figure 6]. In addition, significant reductions in the overall number of exacerbations per patient-treatment year were observed with budesonide/formoterol 320/9 μg and 160/9 μg versus placebo (37% and 41%, respectively; p < 0.001) and formoterol (25% and 29%, respectively; p ≤ 0.004) [see figure S2 in the supplementary material]. These reductions were driven by exacerbations treated with oral corticosteroids, the rate of which was reduced with budesonide/formoterol 320/9 μg and 160/9 μg versus placebo (37.5% and 42.9%, respectively) and formoterol (24.1% and 30.6%, respectively) [p ≤ 0.006]. In addition, the percentage of patients in the budesonide/formoterol 320/9 μg, budesonide/formoterol 160/9 μg, formoterol and placebo groups who experienced an exacerbation was greater in those receiving ICS therapy either alone or in combination before the study (33.7%, 38.2%, 39.9% and 40.1%, respectively) compared with those not previously receiving ICS (25.8%, 24.9%, 29.4% and 32.8%, respectively).\nFig. 6Kaplan-Meier probability curve for the time to first chronic obstructive pulmonary disease exacerbation during randomized treatment. BUD = budesonide; DPI = dry powder inhaler; FM = formoterol; pMDI = pressurized metered-dose inhaler. 
* p ≤ 0.004 vs placebo; † p = 0.026 vs FM.\n\nBODY.RESULTS.EFFICACY EVALUATIONS.HEALTH STATUS AND COPD SYMPTOMS:\nImprovements in SGRQ total score were significantly greater for both budesonide/formoterol dosages compared with placebo (p ≤ 0.006) and for budesonide/formoterol 160/9 μg compared with formoterol (p = 0.006; table III). The improvements from baseline in SGRQ total score in the active treatment groups were not clinically meaningful versus placebo (i.e. decrease of ≥4 points). Significantly greater improvements in COPD symptom variables were observed for both budesonide/formoterol dosages versus placebo for all variables (BCSS, dyspnoea score, cough score, sputum score, sleep score, percentage of awakening-free nights and total daily rescue medication use) [p ≤ 0.003]. Compared with formoterol, mean improvements were significantly greater with budesonide/formoterol 320/9 μg (p ≤ 0.038) for all COPD symptom variables except sputum score and percentage of awakening-free nights, and with budesonide/formoterol 160/9 μg (p ≤ 0.047) for all COPD symptom variables except BCSS, dyspnoea and sputum score (table IV).\nTable III Mean changes (SD) in SGRQ total and domain scores from baseline to end of treatment.\nTable IV Mean (SD) changes in chronic obstructive pulmonary disease (COPD) symptom variables from baseline to the average over the randomized treatment period.\nImprovements (i.e. reductions) in dyspnoea scores were significantly greater for all active treatments versus placebo (p ≤ 0.003) and for budesonide/formoterol 320/9 μg versus formoterol (p = 0.032; table IV). All active treatment arms demonstrated ≥0.2 points (minimal important difference) change from baseline, but only budesonide/formoterol 320/9 μg had an increase of ≥0.2 points over placebo. A significantly (p < 0.001) greater percentage of patients in both budesonide/formoterol groups and in the formoterol group experienced clinically meaningful improvements in dyspnoea versus placebo (see figure S3 in the supplementary material).\n\nBODY.RESULTS.SAFETY EVALUATIONS:\nMean treatment exposure was lowest for placebo (270 days) compared with budesonide/formoterol 320/9 μg (305 days), budesonide/formoterol 160/9 μg (299 days) and formoterol (289 days). The most commonly reported AE (irrespective of causality) was COPD, which had a lower incidence with budesonide/formoterol 320/9 μg versus placebo and a slightly higher incidence with budesonide/formoterol 160/9 μg versus placebo (table V). AEs considered by the investigator to be related to study medication were generally similar among treatment groups with the most commonly reported being oral candidiasis, COPD and dysphonia (see table SI in the supplementary material).\nTable V Overall adverse events (AEs) [irrespective of relationship to study medication] reported by ≥3% of patients.\nThe total incidence of pneumonia-related AEs (pneumonia, bronchopneumonia, lobar pneumonia and pneumonia staphylococcal) was similar for budesonide/formoterol 320/9 μg and 160/9 μg (4.0% and 3.4%, respectively) compared with formoterol (3.4%) and placebo (5.0%) [see table SII in the supplementary material]. For potential lung infections other than pneumonia, the incidence was slightly higher in the active treatment groups versus placebo, which was driven largely by bronchitis (see table SII in the supplementary material). 
AEs typically or potentially associated with local and systemic effects of inhaled corticosteroids (local effects: aphonia, dysphonia, oral candidiasis and candidiasis; systemic effects: weight gain, adrenal suppression, ocular effects, skin effects, psychiatric disorder, diabetes control, thirst, taste effects and bone effects) were more frequent with budesonide/formoterol 320/9 μg (10.3% and 4.0%, respectively) and budesonide/formoterol 160/9 μg (5.7% and 4.5%, respectively) than formoterol (0.6% and 2.6%, respectively) and placebo (2.5% and 2.7%, respectively). The overall incidence of LABA class effects (i.e. tremor, palpitation, tachycardia, potassium changes, glucose changes, headache, agitation, anxiety, sleep effects and muscle cramp) was low, but higher among active treatment groups (budesonide/formoterol 320/9 μg [9.5%], budesonide/formoterol 160/9 μg [8.9%] and formoterol [6.5%]) versus placebo (4.8%). The incidence of cardiac-related AEs was higher for all active treatments (10.5–11.3%) versus placebo (6.9%). The most common cardiac-related AEs were hypertension and angina pectoris (budesonide/formoterol 320/9 μg [2.4% and 1.2%, respectively], budesonide/formoterol 160/9 μg [1.6% and 0.8%, respectively], formoterol [2.6% and 1.8%, respectively] and placebo [2.1% and 1.0%, respectively]). Although the number of atrial fibrillation AEs was low, all cases occurred within the active treatment groups. Discontinuations due to AEs (DAEs) occurred in 238 patients, with a similar incidence observed across treatment groups (range 11.3–12.5%) [see table SIII in the supplementary material]. The most common DAE was COPD, which was highest in the formoterol group (7.3%) and lowest in the budesonide/formoterol 320/9 μg group (4.0%); the incidence was similar in the budesonide/formoterol 160/9 μg (6.1%) and placebo (6.0%) groups (see table SIII in the supplementary material). The overall incidence of non-fatal serious adverse events (SAEs) was highest with formoterol (17.8%) compared with the other treatment groups and higher with budesonide/formoterol 320/9 μg (15.6%) and budesonide/formoterol 160/9 μg (13.6%) versus placebo (12.1%) [see table SIV in the supplementary material]. The most common non-fatal SAEs (occurring in five or more patients [all treatment groups combined]) were COPD (6.8%), pneumonia (1.5%), atrial fibrillation (0.5%), angina pectoris (0.3%), bronchitis (0.3%) and coronary artery disease (0.3%). These non-fatal SAEs occurred at a similar incidence in all treatment groups, except for COPD, which was slightly more common in the active treatment groups (budesonide/formoterol 320/9 μg [7.1%], budesonide/formoterol 160/9 μg [6.7%] and formoterol [7.9%]) versus placebo (5.6%). Fifteen patients died during the randomized treatment period with no imbalance across treatments: three in the budesonide/formoterol 320/9 μg group; six in the budesonide/formoterol 160/9 μg group; two in the formoterol group; and four in the placebo group. Fifteen patients died after the randomized treatment period (≥2 days after stopping study treatment) with no imbalance across treatments: five in the budesonide/formoterol 320/9 μg group; two in the budesonide/formoterol 160/9 μg group; four in the formoterol group; and four in the placebo group. None of the deaths were considered drug related by the investigator. Clinically significant changes in vital signs, 24-hour urinary cortisol, and 12-lead ECGs, QT interval and Holter assessments were rare. 
The geometric mean values of 24-hour urinary cortisol at 6 months and end of treatment were lower in both budesonide/formoterol dosage groups compared with formoterol and placebo (see table SV in the supplementary material). Significant differences in 24-hour urinary cortisol were observed between the budesonide/formoterol 320/9 μg and placebo groups (p = 0.035) at month 6 and between the budesonide/formoterol 320/9 μg and formoterol groups (p = 0.044) at end of treatment. Corrected QT (QTc) intervals ≥450 msec were experienced by similar percentages of patients across all treatment groups, whereas few patients experienced QTc intervals ≥500 msec or a QTc change ≥60 msec (see table SVI in the supplementary material). Data from ECG and Holter recordings showed a similar incidence of new onset atrial fibrillation in the active and placebo groups. BMD was stable over the 1-year study period in all treatment groups, with small but statistically significant differences in changes from baseline observed for budesonide/formoterol 320/9 μg compared with all other treatments for total lumbar spine BMD (p ≤ 0.037) and for budesonide/formoterol 320/9 μg versus formoterol for total hip BMD (p = 0.012) [see table SVII in the supplementary material]. The differences between the treatment groups in changes from baseline in BMD were of unclear clinical relevance, as the geometric least squares mean ratios for these treatment group comparisons were close to 1 (range 0.98–0.99). Minor increases in lenticular opacities and intraocular pressure were noted across all treatment groups in the ophthalmological analysis subset; the magnitude of these changes was similar among the treatment groups (see table SVIII in the supplementary material). Small but significant differences were observed for the change from baseline in posterior subcapsular score between the budesonide/formoterol groups (p = 0.022). Clinically significant changes in ophthalmological assessment were infrequent.\n\nBODY.DISCUSSION:\nIn this study, both budesonide/formoterol dosage strengths resulted in significant improvements from baseline over the randomized treatment period in pulmonary function, health status and COPD symptoms, as measured by dyspnoea, cough and sputum production, and a clinically significant reduction in COPD exacerbation rate compared with placebo. In addition, both budesonide/formoterol pMDI dosages were well tolerated relative to formoterol alone and placebo. The results of this 12-month study confirm those from the earlier related 6-month study by Tashkin et al.[4] and are consistent with those from studies of budesonide/formoterol DPI.[2,3] Taken together, the results demonstrate that this ICS/LABA combination provides benefits beyond formoterol alone in the treatment of COPD. The contribution of budesonide to improvements from baseline in pulmonary function and COPD symptoms was demonstrated by the greater efficacy of budesonide/formoterol 320/9 μg compared with formoterol alone for the co-primary variables of pre-dose FEV1 and 1-hour post-dose FEV1, and for dyspnoea and total BCSS scores, supporting use of the higher-dose combination for COPD symptoms. Both budesonide/formoterol dosage strengths demonstrated greater improvements from baseline compared with formoterol alone in morning and evening PEF, 12-hour FEV1, symptom scores and rescue medication use. 
In addition, the 25–29% reduction in exacerbation rate observed with both budesonide/formoterol dosage strengths beyond the substantial reduction achieved with formoterol alone further demonstrates the important contribution of budesonide to the combination product. The magnitude of the reduction in exacerbation rates reported in this study is similar to that reported previously for fixed-dose regimens of budesonide/formoterol 320/9 μg twice daily[2–4] and fluticasone propionate/salmeterol[11–13] relative to LABA alone. A key difference between this 12-month study and the 6-month study of similar design reported by Tashkin et al.[4] is that this study was powered to show a difference in exacerbations, while the 6-month study was not. This study demonstrated a significant reduction in exacerbation rates in patients treated with budesonide/formoterol 320/9 μg and 160/9 μg compared with those receiving formoterol or placebo. The fact that a greater percentage of patients in the formoterol and placebo groups who previously received ICS therapy, either alone or in combination, experienced exacerbations compared with those who did not previously receive ICS therapy may result in a skewing of the data; however, a similar response was observed in combination budesonide/formoterol patients with a greater percentage of patients who previously received ICS experiencing exacerbations compared with patients who were not previously treated with ICS therapy. Similarly, a greater percentage of patients who previously received ICS therapy discontinued in the formoterol and placebo groups than those who did not previously receive ICS therapy; however, this trend was reversed in the combination budesonide/formoterol groups. The long-term nature of this study allowed for a more thorough assessment of tolerability compared with the 6-month study.[4] No differences in pneumonia incidence were observed among treatment groups in this study or in the 6-month study, despite a shorter exposure time in the placebo group.[4] These results contrast with previous COPD studies that showed an increased incidence of pneumonia with fluticasone propionate/salmeterol treatment.[11–14] However, in this study and the 6-month study,[4] the incidence of potential lung infections other than pneumonia was slightly higher in all active treatments compared with placebo; these differences were largely driven by an increased incidence of bronchitis. The incidence of SAEs, including COPD SAEs, was slightly higher in both budesonide/formoterol groups compared with placebo. This finding may be attributed to the earlier discontinuation in the placebo group, which resulted in a shorter study drug exposure (35 and 29 fewer days vs budesonide/formoterol 320/9 μg and 160/9 μg, respectively). Although the overall incidence of atrial fibrillation, a known LABA class effect, was small (1.1%), an imbalance was noted between the formoterol-containing arms and placebo. However, no differences in new-onset atrial fibrillation were apparent among groups based on 24-hour Holter monitoring or ECG data collected at repeated visits during the randomized treatment period. 
Overall, the safety profile of budesonide/formoterol pMDI in this 12-month study was similar to that reported in the 6-month study by Tashkin et al.,[4] suggesting no increase in safety concerns with a longer duration of treatment.\n\nBODY.CONCLUSIONS:\nBoth dosage strengths of budesonide/formoterol pMDI (320/9 μg and 160/9 μg) were effective and well tolerated over 1 year of treatment in this population of patients with moderate to very severe COPD. In addition, budesonide/formoterol pMDI 320/9 μg demonstrated greater efficacy compared with formoterol for pre-dose FEV1, suggesting that the higher dosage strength containing twice the daily dose of budesonide is appropriate in this patient population.\n\nBODY.ELECTRONIC SUPPLEMENTARY MATERIAL:\nSupplementary material, approximately 247 KB.\n\n**Question:** Compared to Placebo what was the result of Budesonide/formoterol pMDI 320/9 μg on 24-hour urinary cortisol at 6 months?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly decreased\n"
}
] |
significantly decreased
|
218
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: A heart failure self-management program for patients of all literacy levels: A randomized, controlled trial [ISRCTN11535170]\n\n ABSTRACT.BACKGROUND:\nSelf-management programs for patients with heart failure can reduce hospitalizations and mortality. However, no programs have analyzed their usefulness for patients with low literacy. We compared the efficacy of a heart failure self-management program designed for patients with low literacy versus usual care.\n\nABSTRACT.METHODS:\nWe performed a 12-month randomized controlled trial. From November 2001 to April 2003, we enrolled participants aged 30–80, who had heart failure and took furosemide. Intervention patients received education on self-care emphasizing daily weight measurement, diuretic dose self-adjustment, and symptom recognition and response. Picture-based educational materials, a digital scale, and scheduled telephone follow-up were provided to reinforce adherence. Control patients received a generic heart failure brochure and usual care. Primary outcomes were combined hospitalization or death, and heart failure-related quality of life.\n\nABSTRACT.RESULTS:\n123 patients (64 control, 59 intervention) participated; 41% had inadequate literacy. Patients in the intervention group had a lower rate of hospitalization or death (crude incidence rate ratio (IRR) = 0.69; CI 0.4, 1.2; adjusted IRR = 0.53; CI 0.32, 0.89). This difference was larger for patients with low literacy (IRR = 0.39; CI 0.16, 0.91) than for higher literacy (IRR = 0.56; CI 0.3, 1.04), but the interaction was not statistically significant. At 12 months, more patients in the intervention group reported monitoring weights daily (79% vs. 29%, p < 0.0001). After adjusting for baseline demographic and treatment differences, we found no difference in heart failure-related quality of life at 12 months (difference = -2; CI -5, +9).\n\nABSTRACT.CONCLUSION:\nA primary care-based heart failure self-management program designed for patients with low literacy reduces the risk of hospitalizations or death.\n\nBODY.BACKGROUND:\nLimited literacy skills are common among adults in the United States [1]. Low literacy is associated with increased risk of hospitalization and worse control of chronic diseases [1-4]. Heart failure is a common chronic illness requiring multiple medications and significant self-care. Heart failure is the leading cause of hospitalization in the Medicare population [5]. The complexity of care for heart failure puts people with low literacy at considerable risk for adverse outcomes including hospitalization, worse quality of life, and mortality. Heart failure disease-management interventions appear effective in reducing rehospitalizations and improving quality of life [6]. 
Most randomized clinical trials of heart failure disease management completed over the last 10 years have enrolled patients during, or shortly after, hospitalization and reported the outcome of readmission [6]. Although the designs of these programs vary, several have tested education and support to enhance patient self-management as the main component of the intervention [7-10]. The content of self-management education usually includes teaching to understand medications, reduce salt intake, monitor daily weights, and recognize symptoms. Most programs include structured follow-up either by home visit, phone, or mail. Only a few, uncontrolled studies specifically ask patients to self-adjust their diuretics [11,12]. Heart failure self-management programs may be particularly effective for vulnerable populations, such as those with poor literacy [13,14]. However, to our knowledge, no previous studies have explicitly examined the role of self-management programs in a low literacy population. A recently published study and accompanying editorial suggested that such self-management support may be most effective among vulnerable populations [13,14]. Low literacy may represent a vulnerability for which we should design our programs. Disease management for patients with low literacy may require refined approaches to foster self-management skills. We developed a heart failure self-management program for use by patients with a variety of literacy levels [15]. We performed a randomized controlled trial comparing our self-management program to usual care among outpatients to test if the program could reduce hospitalizations and improve heart failure-related quality of life.\n\nBODY.METHODS.STUDY DESIGN:\nWe conducted a randomized controlled trial in the University of North Carolina (UNC) General Internal Medicine Practice, which serves a wide socioeconomic range of patients. The practice, staffed by over 20 attending faculty and 70 medical residents, cares for over 500 patients with heart failure.\n\nBODY.METHODS.STUDY PARTICIPANTS:\nTo be eligible, patients had to have a clinical diagnosis of heart failure confirmed by their primary provider through a direct interview, and one of the following: 1) chest x-ray findings consistent with heart failure, 2) ejection fraction <40% by any method, or 3) a history of peripheral edema. They also had to have New York Heart Association class II-IV symptoms within the last 3 months. Patients were excluded if they had moderate to severe dementia (based on the treating physician's clinical judgment), terminal illness with life expectancy less than 6 months, severe hearing impairment, blindness, current substance abuse, a serum creatinine >4 mg/dl or on dialysis, a requirement of supplemental oxygen at home, lacked a telephone, or were scheduled to undergo cardiac surgery or awaiting heart transplant. We did not exclude patients on the basis of literacy skill because we felt patients of all literacy levels would benefit from this intervention. Patients who read well often prefer information presented in an easy-to-read format [16]. We accepted referrals from the cardiology clinic if patients met eligibility criteria. This study was approved by the Institutional Review Board of the UNC School of Medicine, and all participants gave informed consent prior to enrollment. For participants who could not adequately read the informed consent document, the research assistant read and explained it to them. 
They were asked to sign a short form indicating that the informed consent document was reviewed and they agreed to participate. When the short form was used, a witness was asked to attest to the adequacy of the consent process.\n\nBODY.METHODS.PROCEDURES:\nParticipants were recruited between November 2001 and April 2003 from the General Internal Medicine and Cardiology Practices at UNC Hospitals. A trained research assistant screened all patients age 30–80 for use of furosemide. If the patient was on furosemide, their physician was queried about the presence of heart failure. If the patient met eligibility criteria and consented to participate, baseline data were collected. We then randomized patients by concealed allocation based on a random number generator to receive the intervention or usual care. All patients were followed for one year. All data were collected in the General Internal Medicine Practice.\n\nBODY.METHODS.INTERVENTION:\nThe intervention was delivered in the General Internal Medicine Practice. The educational materials and disease management intervention were previously described in detail, and the intervention is summarized here [15]. The intervention began with a 1-hour educational session with a clinical pharmacist or health educator during a regular clinic visit. Patients were given an educational booklet designed for low literacy patients (written below the 6th grade level and extensively pre-tested in focus groups and a pilot study [15]) and a digital scale. The educator and patient reviewed the booklet together, including management scenarios. As part of the educational session, patients were taught to identify signs of heart failure exacerbation, perform daily weight assessment, and adjust their diuretic dose. Because this intervention was aimed at patients with low literacy, the health educator used pedagogic strategies felt to improve comprehension for patients with low literacy [17]. For example, the educator had the patient teach back the information [18], engaged the patient in filling out the notebook, and used brainstorming to help the patient incorporate self-management into their lives. The educator, patient, and primary care physician collaborated to establish the patient's \"good weight\" (i.e., where the patient's heart failure was stable) and baseline diuretic dose. The educator then filled in the management plan in the patient's notebook to help the patient better manage weight fluctuations and self-adjust the diuretic dose based on weight (Figure 1). The general plan involved doubling the dosage if weight went up and halving it if weight went down. The program coordinator then made scheduled follow-up phone calls (days 3, 7, 14, 21, 28, 56) and monthly during months 3–6. The follow-up phone calls, each lasting 5–15 minutes, were designed to reinforce the educational session and provide motivation for the patients. Again, the program coordinator had the patient describe their self-management practices and offered feedback to improve them. Patients experiencing worsening symptoms were scheduled acute visits with their physician. We did not provide specialized nursing assessment, care or medication advice beyond diuretic dosing. If the patient's doctor determined that the good weight had changed, the program coordinator would revise the care plan with the patient. 
Patients enrolled in the control group received a general heart failure education pamphlet written at approximately the 7th grade level, and continued with usual care from their primary physician. The only contacts between the research team and the control patients were at enrollment and data collection.\n\nBODY.METHODS.MEASURES:\nWe assessed outcomes at 6 and 12 months through in-person interviews and review of the medical record. To be sensitive to low literacy, all interviews were conducted verbally by a trained research assistant. If patients were unable to come to clinic for the interview, it was conducted by phone. The research assistant was not blinded to the patient's study group. Primary outcomes were death or all-cause readmission and heart failure-related quality of life at the end of 12 months. Data on hospitalization dates were obtained from the patient and confirmed by medical chart review. All admissions, regardless of hospital or cause, were counted. For exploratory analyses, we classified reason for admission as cardiac or non-cardiac. Cardiac admissions included those primarily for heart failure (e.g., shortness of breath and edema relieved by diuresis) and other cardiac causes such as chest pain, arrhythmias, or syncope. Cause of admission was determined by chart review by one of the authors (D.D.) who was blinded to treatment allocation. Heart failure-related quality of life was assessed using a modified version of the Minnesota Living with Heart Failure Questionnaire (MLHF). The MLHF is a 21 question instrument with a 6-point Likert scale response option and scores ranging from 0 to 105 [19]. In pilot testing of the MLHF, we found that low literacy patients had trouble answering questions with the standard 6-point Likert scale [15], so we changed the response scale to 4 points, using a visual display with stars to represent increasing severity. The 4-point Likert scale was scored as 0, 1, 3, and 5 to approximate standard scores on the MLHF. Secondary measures included heart failure self-efficacy, knowledge, and behaviors. Self-efficacy was measured with an 8 item scale developed for the behaviors needed in this trial as suggested by self-efficacy scale developers [20]. Respondents used a 4-point Likert scale yielding a total score from 0–24. We assessed heart failure knowledge using a knowledge test previously developed for this population [15], Heart failure self-management behavior was assessed by asking patients how often they weighed themselves. We used patient self-report and the medical chart to measure age, gender, race, insurance status, income, years of education, medication use, years with heart failure, and the presence of co-morbidities. We measured literacy using the Short Test of Functional Health Literacy in Adults (S-TOFHLA) [21], a well-validated scale that correlates well with other measures of reading ability [22]. Patients who scored in the inadequate literacy range on the S-TOFHLA were considered to have \"low literacy.\" The cut-point for inadequate literacy is roughly analogous to the 4th grade reading level.\n\nBODY.METHODS.SAMPLE SIZE:\nSample size was based on pilot study results showing a 9-point improvement in MLHF scores over 3-months with the intervention [15]. Detecting a 9-point difference between intervention and control group with 80% power and alpha set at 0.05 required 70 patients per group. We aimed to enroll 150 patients to account for possible attrition, but stopped at 127 because funding ended. 
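The measures description above notes that the modified 4-point MLHF response scale was scored as 0, 1, 3 and 5 to approximate the standard 0–105 range across the 21 items. A minimal illustration of that rescoring is sketched below; it assumes item responses are coded 0–3 (the coding is not stated in the article) and uses invented responses.

```python
# Rescoring sketch for the modified MLHF described above.
# Assumption (not stated in the article): item responses are coded 0-3 on the 4-point scale.
RESCORE = {0: 0, 1: 1, 2: 3, 3: 5}

def modified_mlhf_total(responses):
    """Sum 21 rescored items; all-maximum responses give 21 * 5 = 105."""
    if len(responses) != 21:
        raise ValueError("expected 21 item responses")
    return sum(RESCORE[r] for r in responses)

example_responses = [3, 2, 2, 1, 0, 3, 1, 2, 0, 1, 2, 3, 1, 0, 2, 1, 3, 2, 1, 0, 2]  # invented
print(modified_mlhf_total(example_responses))
```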
We did not power this study to detect differences in hospitalization, but studies with even smaller numbers of patients have shown a difference for that outcome [7].\n\nBODY.METHODS.STATISTICAL METHODS:\nPatients who did not return any phone calls and did not return for follow-up assessment did not have outcome data for analysis. Patients who withdrew from the study were censored at the time of withdrawal; any data collected prior to withdrawal were included in the analysis. Baseline differences between groups were assessed using t-tests for continuous outcomes and chi-squared tests for categorical outcomes. For MLHF, heart failure self-efficacy and heart failure knowledge, we used two-sample t-tests. Non-parametric tests were also performed for all comparisons, but results did not differ from the parametric tests. Because of the small sample size and the unequal distribution of baseline characteristics, we adjusted for baseline differences using linear regression. Analyses of self-reported outcomes, such as MLHF, excluded patients who died or withdrew from the study before 6 or 12 month data was collected. For hospitalization or death, we used negative binomial regression and censored patients at the time of death or withdrawal from the study. Based on the likelihood ratio test, negative binomial regression was a better fit for the data than a Poisson regression. Additionally, the Vuong test confirmed that a zero-inflated model was inappropriate [23]. Because of uneven distribution of baseline characteristics, we performed analysis of covariance with negative binomial regression to control for baseline differences [24]. We identified the following variables that could contribute to hospitalization or death based on previous studies: age, race, gender, literacy level, hypertension, diabetes, creatinine, MLHF score, use of β-blockers, angiotensin converting enzyme (ACE) inhibitors or angiotensin receptor blockers (ARBs), use of digoxin, and presence of systolic dysfunction [7,25]. Variables were not included in the model if the relationship between the variable and the study group or outcome had a p value greater than 0.3. We started with a model including the following items to arrive at the best point estimate: age, gender, hypertension, creatinine, MLHF, use of β-blockers, and use of ACE inhibitors or ARBs. We then eliminated variables from the model if p > 0.30, and if the point estimate remained within 10% of the initial estimate. We prespecified a subgroup analysis in patients with low literacy to analyze if the intervention had a similar effect. The same analysis described above was repeated for the subgroups of patients with low literacy and those with higher literacy. The initial multivariate model for the subgroups analysis included: age, gender, hypertension, MLHF, use of β-blockers, and use of ACE inhibitors or ARBs.\n\nBODY.METHODS.ROLE OF THE FUNDING SOURCE:\nThe funding sources had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; or preparation, review, or approval of the manuscript.\n\nBODY.RESULTS.PATIENTS:\nWe screened 919 patients on furosemide between November 2001 and April 2003. 127 met eligibility criteria and agreed to participate (Figure 2). Of those not enrolled, 407 did not have heart failure according to their physician, 367 did not meet eligibility criteria and 27 declined to participate. 
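The statistical methods above analyse hospitalization or death as an incidence rate, modelled with negative binomial regression and summarised as incidence rate ratios (IRRs). The sketch below shows only the simpler crude IRR with a Wald confidence interval on the log scale, as a rough illustration of how such a ratio is formed. The event counts echo the results reported later (68 intervention vs 111 control events), but the person-year denominators are placeholders, since individual follow-up times are not given, and this is not the adjusted model the authors fitted.

```python
import math

def crude_irr(events_a, person_years_a, events_b, person_years_b, z=1.96):
    """Crude incidence rate ratio (group A vs B) with a Wald 95% CI on the log scale."""
    irr = (events_a / person_years_a) / (events_b / person_years_b)
    se_log = math.sqrt(1.0 / events_a + 1.0 / events_b)
    lower = math.exp(math.log(irr) - z * se_log)
    upper = math.exp(math.log(irr) + z * se_log)
    return irr, (lower, upper)

# Event counts echo the results section; person-years are invented for illustration.
print(crude_irr(68, 55.0, 111, 60.0))
```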
Of those randomized to the control group, 1 never returned after the first visit, 1 withdrew during the study and 4 died during the study. Follow-up was completed for all of the remaining participants (98%) (Figure 3). Of those randomized to the intervention group, 3 never returned after the first visit, 4 withdrew during the study and 3 died during the study. Follow-up was completed for all of the remaining participants (93%). At baseline, most characteristics were similar between the two groups (Table 1). However, the control group had more participants with hypertension, fewer with diabetes, and fewer men. Of heart failure related characteristics, the control group had more participants with systolic dysfunction, and taking β-blockers. The intervention group had more participants taking ACE inhibitors or ARBs, and digoxin. Regardless of these differences, none were statistically significant. The control group did have statistically significantly higher baseline MLHF scores representing worse symptoms at baseline.\n\nBODY.RESULTS.HOSPITALIZATION OR DEATH:\nThere were 68 hospitalizations (65) or deaths (3) in the intervention group and 111 (107 hospitalizations, 4 deaths) in the control group. The crude all-cause hospital admission or death incidence rate ratio (IRR) was 0.69 (95% CI 0.40, 1.19). After adjusting for age, gender, use of ACE inhibitor or ARB, use of a β-blocker, presence of hypertension, and baseline MLHF, intervention patients were less likely to have the outcome (IRR = 0.53; 95% CI 0.32, 0.89). 61% of patients in the control group had at least one hospitalization or died, and 42% of patients in the intervention group had at least 1 hospitalization or died (p = 0.13).\n\nBODY.RESULTS.CARDIAC HOSPITALIZATION:\n39% of patients in the control group and 34% of patients in the intervention group had at least one hospitalization for cardiac causes (p = 0.55). The unadjusted IRR was 0.79 (95% CI 0.42, 1.5). After adjusting for baseline differences, the IRR was 0.85 (95% CI 0.44, 1.7).\n\nBODY.RESULTS.HEART FAILURE-RELATED QUALITY OF LIFE:\nIn unadjusted analysis, the control group, on average, improved 5 points on the MLHF and the intervention group improved by 1 point. The difference was not statistically significant (3.5 points, 95% CI 11, -4, p = 0.36). After adjusting for baseline differences between the groups, the difference was 2 points (95% CI 9, -5, p = 0.59) suggesting no effect on heart failure-related quality of life.\n\nBODY.RESULTS.OTHER OUTCOMES.KNOWLEDGE:\nHeart failure related knowledge improved more in the intervention group than in the control group. Mean difference in score improvement was 12 percentage points (95% CI 6, 18; p < 0.001).\n\nBODY.RESULTS.OTHER OUTCOMES.SELF-EFFICACY:\nHeart failure self-efficacy improved more in the intervention group than in the control group. Mean difference in score improvement was 2 points (95% CI 0.7, 3.1; p = 0.0026).\n\nBODY.RESULTS.OTHER OUTCOMES.SELF-CARE BEHAVIOR:\nSignificantly more patients in the intervention group than in the control group reported daily weight measurement at 12 months (79% vs. 29%, p < 0.001).\n\nBODY.RESULTS.SUBGROUP ANALYSES ACCORDING TO LITERACY:\nTwenty-four patients in each group had inadequate literacy based on the S-TOFHLA (Table 2). Among these patients, there was no difference in quality of life score in unadjusted and adjusted analyses (difference = -1.6; 95% CI -15, 12); p = 0.81). For the combined outcome of hospitalization or death, the unadjusted IRR was 0.77 (95% CI 0.30, 1.94). 
After adjusting for baseline differences, the IRR was 0.39 (95% CI 0.16, 0.91). Seventy-five patients had marginal or adequate literacy based on the S-TOFHLA. We found no difference in quality of life score in unadjusted and adjusted analyses (difference = -4.2; 95% CI -14, 6; p = 0.40). Among patients in the higher literacy group, the unadjusted IRR for hospitalization or death was 0.65 (95% CI 0.33, 1.3). After adjusting for baseline differences, the IRR was 0.56 (95% CI 0.30, 1.04). We did not find a statistically significant effect modification between literacy and the intervention.\n\nBODY.DISCUSSION:\nA heart failure self-management program designed for patients with low literacy reduced the rate of the combined endpoint of hospitalization or death. The prespecified subgroup analyses suggest that patients with low literacy benefited as much from the intervention as the patients with higher literacy. The success of our intervention reflects the goals of our program. We designed an easy-to-read and use educational booklet and self-management plan, and focused on overcoming barriers to learning self-management [15]. Our intervention was founded on teaching self-management. We focused on helping patients understand signs and symptoms of worsening heart failure and perform self-adjustment of diuretics based on weight fluctuation. Many care providers would not attempt to teach patients, particularly those with low literacy, how to self-adjust their diuretic medication. We found that, with careful teaching, many patients incorporated this strategy into their daily routine successfully, as demonstrated by improved self-care behaviors. Teaching self-adjustment of diuretics, rather than the conventional teaching to call the care provider if weight fluctuates, empowers patients to take more control over their illness. Self-adjustment of diuretic dose is a prominent aspect of the self-management training we provided to the intervention patients. Other programs to improve patient self-management have not been explicit in teaching patients to self-adjust their diuretic dose based on weight fluctuation. Although our outcomes are comparable to others', using this approach puts more control into the hands of the patient. Furthermore, our intervention appears effective among patients with low literacy skills, a group often overlooked for empowering interventions. Our study adds to the growing literature on disease management programs for patients with heart failure [6], particularly those that focus on self-management training [7-10]. Studies focusing on self-management training have demonstrated comparable improvements in hospitalization rates to more comprehensive programs that aim to improve the quality of pharmaceutical prescribing, provide home visits, and take place in specialized cardiology clinics [6]. Such comprehensive programs have also been shown to reduce mortality, but self-management programs have not [6]. We did not detect any difference in heart failure related quality of life which was the outcome we powered our study to detect. Other self-management studies that have found improved quality of life have enrolled patients during a heart failure hospitalization [8,9]; however, we enrolled patients in the outpatient setting while they were clinically stable. Improving quality of life for stable outpatients may be more difficult for this type of intervention. 
We have previously reported that patients with diabetes and low literacy benefited more from a disease management intervention than those with higher literacy skills [26]. A similar result in two different chronic diseases substantiates the claim that appropriately designed disease management programs may have greater effect for low literacy or vulnerable populations, who are most at risk for adverse outcomes with usual care. This finding is particularly important in light of the recent study by DeBusk and colleagues that did not find a reduction in hospitalization with a well-designed comprehensive intervention [13]. The authors and an accompanying editorial [14] suggested that the failure to detect improvement may have occurred because the patients studied were less at-risk than in other studies. They called for more research to determine better ways of targeting disease management. We believe that low literacy is an important marker for vulnerability to adverse outcomes, and that disease management programs targeted to patients with low literacy may be an effective way of focusing resources on those most able to benefit. If patients with low literacy are to be preferentially recruited for such programs, innovative outreach and screening efforts will likely be required, as patients with low literacy may face particular barriers to accessing such care. This study should be interpreted in light of its limitations. Research assistants were not blind to group assignment during the assessment of self-reported outcomes. As such, patients in the intervention may have been more likely to inflate their responses in an effort to please the interviewer. This effect would tend to inflate patient responses to the subjective assessments of heart failure-related quality of life, self-efficacy, and self-care behaviors. The MLHF questionnaire was modified from its original form to make it easier for patients with low literacy to respond. This change in the scale may have changed its ability to detect important changes in heart failure related quality of life. Because the groups' mean scores were almost identical, we do not feel this limitation changed our results. In a similar vein, most questionnaires are not validated in low literacy populations, raising questions as to their ability to perform to the same standards. Our sample size was small, which did not allow for an even distribution of baseline variables among the groups. We controlled for baseline differences between groups in our analysis. While it is controversial whether or not to control for baseline differences in randomized controlled trials, some analysts have argued that doing so improves the power without introducing bias [24]. A larger, multi-site study would offer better control of confounders, better generalizability, and more power to determine differences in effect according to literacy. We did not collect data on the resources needed to implement this type of intervention in usual settings, and such a study and cost-effectiveness analysis would be helpful for most interventions of this type. We used health educators, not nurses or physicians, to deliver the intervention. By using less highly trained individuals to deliver the intervention, we enabled nurses and physicians to perform clinical tasks more commensurate with their training. 
Other studies that have performed global cost estimates have found that the savings from reductions in hospitalizations exceed the cost of the intervention [6].\n\nBODY.CONCLUSION:\nIn conclusion, our heart failure self-management program, designed for patients of all literacy levels, appears to reduce rates of hospitalization and death. Patients with low literacy, and other vulnerable patients, may stand to benefit most from these programs. Further research into the design, implementation, and dissemination of disease management programs for low literacy patients will be crucial for meeting the health care needs of the growing population of patients with chronic illness.\n\nBODY.COMPETING INTERESTS:\nDrs. DeWalt and Pignone have received honoraria and grants from Pfizer, Inc., Dr. Rothman has received grants from Pfizer, Inc., and Dr. Sueta is in their speakers bureau.\n\nBODY.AUTHORS' CONTRIBUTIONS:\nDD conceived of the study, participated in its design and coordination, performed statistical analyses, interpretation of the data, and drafted the manuscript. RM, MB conceived of the study and participated in its coordination. MK, KC coordinated the study, and collected the data, RR, CS participated in study design and interpretation of the data. MP conceived of the study, participated in its design and coordination, and interpretation of the data. All authors reviewed the manuscript for important intellectual content and gave final approval.\n\nBODY.PRE-PUBLICATION HISTORY:\nThe pre-publication history for this paper can be accessed here:\n\n**Question:** Compared to Standard information about self-care what was the result of Follow-up and thorough education on self-care on Baseline diabetes?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** no significant difference\n"
}
] |
no significant difference
|
255
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Impact and Process Evaluation of Integrated Community and Clinic-Based HIV-1 Control: A Cluster-Randomised Trial in Eastern Zimbabwe\n\n ABSTRACT.BACKGROUND:\nHIV-1 control in sub-Saharan Africa requires cost-effective and sustainable programmes that promote behaviour change and reduce cofactor sexually transmitted infections (STIs) at the population and individual levels.\n\nABSTRACT.METHODS AND FINDINGS:\nWe measured the feasibility of community-based peer education, free condom distribution, income-generating projects, and clinic-based STI treatment and counselling services and evaluated their impact on the incidence of HIV-1 measured over a 3-y period in a cluster-randomised controlled trial in eastern Zimbabwe. Analysis of primary outcomes was on an intention-to-treat basis. The income-generating projects proved impossible to implement in the prevailing economic climate. Despite greater programme activity and knowledge in the intervention communities, the incidence rate ratio of HIV-1 was 1.27 (95% confidence interval [CI] 0.92–1.75) compared to the control communities. No evidence was found for reduced incidence of self-reported STI symptoms or high-risk sexual behaviour in the intervention communities. Males who attended programme meetings had lower HIV-1 incidence (incidence rate ratio 0.48, 95% CI 0.24–0.98), and fewer men who attended programme meetings reported unprotected sex with casual partners (odds ratio 0.45, 95% CI 0.28–0.75). More male STI patients in the intervention communities reported cessation of symptoms (odds ratio 2.49, 95% CI 1.21–5.12).\n\nABSTRACT.CONCLUSIONS:\nIntegrated peer education, condom distribution, and syndromic STI management did not reduce population-level HIV-1 incidence in a declining epidemic, despite reducing HIV-1 incidence in the immediate male target group. Our results highlight the need to assess the community-level impact of interventions that are effective amongst targeted population sub-groups.\n\nBODY.INTRODUCTION:\nHIV-1–prevalence declines may now be occurring in some sub-Saharan African countries [1]. However, there remains little direct evidence that prevention measures—rather than natural HIV-1 epidemic dynamics [2] or behaviour change prompted by mortality [3]—have contributed to the slowing of HIV-1 epidemics [4,5]. Syndromic management of sexually transmitted infections (STIs) proved effective early in an HIV-1 epidemic in north-west Tanzania [6]. Peer education to promote safe behaviours showed promise in early process evaluations [7], but a randomised controlled trial (RCT) of factory workers in Harare, Zimbabwe, done in the mid-1990s, proved inconclusive [8]. Subsequent RCTs of syndromic management [9] and mass treatment of STIs [10], together with an information, education, and communication (IEC) behaviour-change programme [9], showed no effect in more mature epidemics. 
Integrated implementation of synergistic community-based HIV-1 control strategies could be a more cost-effective and sustainable approach to HIV-1 prevention than parallel application of vertical (top-down) programmes [11]. One scientific evaluation of such a strategy has been reported in which a combination of IEC activities amongst the general population and syndromic STI management showed no impact on HIV-1 incidence at the population level [9], although participation in the IEC activities was associated with reduced HIV-1 infection in women [12]. We conducted a cluster-RCT to test the hypothesis that integrated implementation of combined community- and clinic-based HIV-1 prevention, in which IEC activities focus primarily on high-risk populations, can be feasible and effective in reducing HIV-1 incidence in a major maturing epidemic in eastern Zimbabwe (Protocols S1 and S2; Text S1 and S2).\n\nBODY.METHODS.PARTICIPANTS AND RANDOMISATION PROCEDURE:\nThe study communities comprised six pairs of communities matched by socio-economic type—small town, tea/coffee estate, forestry plantation, roadside trading settlement, and subsistence farming area (two pairs) (Figure 1). Each community included at least one Government or Mission health centre. It was anticipated that HIV-1 incidence would be similar within each pair of communities. Within each pair, one community was assigned at random (un-blinded coin toss by a Ministry of Health official witnessed by programme and research personnel) to receive the additional intervention and the other to be the control. These procedures were designed to ensure that Mission, non-governmental organisation, and private sector programmes (for details, please refer to the following section) would be distributed evenly between intervention and control sites. Figure 1Location of Intervention and Control Communities in Manicaland Province, Eastern Zimbabwe We assessed the effect of the intervention using results from laboratory tests for HIV-1 infection and questionnaire data collected in the baseline and 3-y follow-up rounds of a population-based, closed-cohort survey. The 12 study communities were enumerated in a phased manner, with paired communities being enumerated consecutively to minimise the effects of any seasonal factors. HIV-1–prevention activities were commenced in each intervention community shortly after completion of the baseline survey in that community. In each community, individuals eligible for the study were identified in the first round using data from household listings prepared in an initial census. All males and females aged 17–54 y and 15–44 y at last birthday (the age groups expected to have the highest incidence of HIV infection), respectively, who had slept in a household in the community for at least four nights in the previous month, and who had also done so at the same time 1 y earlier, were considered eligible for the study. In heterosexually driven HIV-1 epidemics, risk of infection can be correlated amongst marital partners [13]. 
Therefore, to maximise statistical power to detect differences in HIV-1 incidence, enrolment was restricted to one randomly selected member per marital group.\n\nBODY.METHODS.INTERVENTIONS:\nIntervention and control communities were to receive standard Government services including basic syndromic STI management, condom distribution from health clinics and Zimbabwe National Family Planning Council outlets, home-based care, and limited HIV/AIDS–focussed IEC activities (e.g., occasional AIDS-awareness meetings and distribution of posters and leaflets). In addition, social marketing of male and female condoms would be provided through an ongoing national programme [14]. The intervention comprised targeted and population-level strategies to promote safer sexual behaviour and to improve treatment of STIs that facilitate HIV-1 transmission. The intervention strategies were implemented by two local non-governmental organisations (Family AIDS Caring Trust and the Biomedical Research and Training Institute) and the Zimbabwe Ministry of Health and Child Welfare through an integrated programme of community- and clinic-based activities. Integration of the individual programme components was achieved through the joint involvement of the participating agencies in the planning and implementation of activities and through the inclusion of biomedical and behavioural aspects within each component. The programme design comprised three key components: (1) peer education and condom distribution amongst commercial sex workers and male clients at workplaces and in the general community, supported by income-generating projects; (2) strengthened syndromic management of STI services at local health centres; and (3) open days with HIV/AIDS IEC activities at health centres to promote safer sexual behaviour and to increase the uptake of local STI treatment services. The peer-education component was based on a model which had been developed by the Project Support Group at the University of Zimbabwe [7] and which had been widely implemented within Zimbabwe and neighbouring countries. Activities were held weekly at workplaces and at locations within the general community (e.g., beer halls and markets) where casual relationships were most frequently formed [15]. The target population comprised sex workers and male clients who form a bridge population in HIV transmission [16] between sex workers and the monogamous (or serial monogamous) majority of women [17,18]. It was posited that the high HIV-1 incidence observed amongst young women could be reduced by altering the behaviour of their older male partners whose own behaviour was intrinsically more risky [19]. The behavioural component would be reinforced in counselling sessions with STI patients and through micro-credit income-generating projects to reduce unmarried women's dependence on commercial sex work. The micro-credit scheme consisted of small interest-free loans repayable over 10 mo, provided to groups and to individuals together with training in small-business management. The targeted activities would be extended to the general population through open days held at local health centres. Besides providing basic HIV/AIDS information, it was envisaged that programme meetings and activities, by their continuous nature, would sustain high levels of awareness of the risks of HIV transmission and would facilitate renegotiation of community social norms, making safer behaviours easier to adopt. 
The key messages of the programme were: (1) remain faithful to one regular sexual partner; (2) use condoms consistently with any casual sexual partners; and (3) seek prompt and effective treatment for any STIs. Syndromic management of STIs at primary healthcare centres was first introduced in Zimbabwe in the 1980s [20] and formed the basis of STI diagnosis and treatment services at baseline in the intervention and control communities. It was envisaged that these services could be strengthened and made more effective through a programme of regular classroom training and on-site supervision of nursing staff, through the introduction of training in systemic counselling for STI patients, and through the provision of small quantities of treatment drugs to cover delays in routine supplies. Quality-assurance procedures applied in the intervention communities included pre- and post-training tests for peer educators and, for nursing staff, attending the syndromic STI management and systemic counselling courses, regular on-site supervision (including random spot checks) and training, refresher courses, routine planning meetings and monitoring of service statistics, and quarterly workshops where detailed programme procedures were reviewed and updated. An interim qualitative process evaluation of intervention activities was conducted during the inter-survey period, and a report on the findings was provided to the implementing organisations.\n\nBODY.METHODS.OUTCOME AND PROCESS MEASURES:\nThe primary outcome of the study was HIV-1 incidence at the community level amongst individuals who were uninfected at baseline. Blood was collected onto Whatman No. 3 filter paper and transported to the Biomedical Research and Training Institute laboratory in Harare. Blood spots were air dried at 4 °C and, for long-term (>1 mo) storage, were kept at −20 °C. For baseline studies, blood was eluted into phosphate-buffered saline, and antibodies to HIV were detected using a dipstick dot EIA (ICL-HIV-1/HIV-2 Dipstick, [PATH, http://www.path.org; produced locally in Thailand]) and a standard protocol [21,22]. All positive results and a 10% sample of negative results were confirmed using a plate EIA (Abbott Third-Generation HIV-1/HIV-2 EIA [http://www.abbott.com] or Genelavia MIXT HIV-1/HIV-2 [Sanofi Diagnostics Pasteur, Marnes La Coquette, France]). At follow-up, a similar protocol was followed. Only the samples from those participants recorded as being HIV seronegative at baseline were tested at follow-up, again using a dot EIA (ICL-HIV-1/HIV-2 Dipstick, [PATH, produced locally in India]). Where seroconversion was indicated, the frozen stored baseline sample was retested to confirm the original negative result using the same dot EIA test. Where the baseline result remained negative, the Abbott EIA test was used to confirm both baseline and follow-up results. The change in place of manufacture of the dot EIA and the exclusive use of Abbott test kits to confirm positive sera at follow-up was due only to changes in the supply of test reagents, and not to perceived changes in sensitivity or specificity [23]. Apart from the principal investigators (based in Harare, London and Oxford) and those nurses given permission by participants requesting voluntary counselling and testing (VCT), all research personnel remained blind to the HIV-1 status of individual participants. 
Secondary outcomes, measured at the community and individual level, were self-reported genital ulcers and urethral or vaginal discharge in the past year (STI cases), STI treatment effectiveness (self-reported cessation of symptoms), indicators of sexual and health-seeking behaviour change, and HIV/AIDS knowledge. The behaviour-change variables assessed were sexual debut, sexual partner change in the past year, non-regular partnerships in the past month, and unprotected sex with regular and casual partners in the past 3 y. The data on sexual partnerships and condom use were collected using the Informal Confidential Voting Interview method for 75% of respondents selected at random in the first round of the survey. This method includes procedures to build rapport, ensure a non-judgemental interview approach, and provide reassurance that there are no right or wrong answers to questions of a personal nature, and uses a simple secret voting-box system to reduce embarrassment and guarantee confidentiality in low-development settings [18]. Its use has been shown to be associated with greater disclosure of socially proscribed behaviour in the study population [24]. Process indicators examined comprised changes in knowledge and psychosocial status and indicators of programme coverage and quality.\n\nBODY.METHODS.SAMPLE-SIZE CALCULATIONS:\nInitial sample-size calculations assumed 20% HIV-1 prevalence at baseline, 30% loss to follow-up after 2 y, and 80% power to detect a 40% reduction in HIV-1 incidence in the intervention communities compared with control communities, assuming a background yearly incidence of 2%. Based on six pairs of communities and a co-efficient of variation between communities of 0.15, the required sample size in each community was 1,000. Funding constraints and slower than anticipated implementation of intervention activities led to revisions of the sample size for each community to 800 and the length of follow-up to 3 y, respectively. Assuming a proportionate increase in loss to follow-up to 41%, these arrangements also yielded 80% power to detect a 40% reduction in HIV-1 incidence.\n\nBODY.METHODS.STATISTICAL METHODS:\nTo test the randomisation with small numbers of communities, HIV-1 prevalence, STI history, and socio-demographic characteristics were compared at baseline for study participants in the intervention and control communities, together with uptake of STI treatment and VCT services offered at baseline. Outcome and process indicators were compared for intervention versus control communities. Analysis of the primary outcome was on an intention-to-treat basis. Incident events and person-years at risk of seroconversion were used to calculate HIV-1 incidence rates and unadjusted and adjusted incidence rate ratios (IRR) with 95% confidence intervals (CIs) for each pair of communities. Adjustment was made for sex, 3-y age group, and community-level baseline HIV prevalence. The overall IRRs (unadjusted and adjusted) were taken to be the geometric means of the IRRs for the six pairs of communities. We calculated 95% CIs for each geometric mean as geometric mean ± 1.96 × standard error of the geometric mean. Paired student t-tests on the logarithms of the pair-specific IRRs were used to test whether these differed significantly from unity [25]. The coefficient of variation between communities was calculated based on baseline HIV prevalence using a standard procedure for pair-matched studies [26]. 
Analyses of prevalence for secondary outcome and process variables were conducted separately for male and female respondents seen at both survey rounds by fitting logistic regression models to the individual-level data and adjusting for community pair and, where available, value of variable at baseline. Since most programme activities were targeted and overall coverage of programme activities was therefore limited, sub-group analyses, adjusted for community pair, were done for HIV-1 incidence and behavioural outcomes to assess the individual-level effects of attendance at programme meetings. Data were entered and validated using SPSS-PC (http://calcnet.mth.cmich.edu/org/spss/index.htm) and data analysis was conducted in Stata version 7 (http://www.stata.com). Statistical tests were double-sided and results were taken to be significant at the 5% level.\n\nBODY.METHODS.ETHICAL APPROVAL:\nAll study participants in the intervention and control communities were offered free VCT for HIV-1, an information sheet on HIV/AIDS, results from a diagnostic test for Trichomonas vaginalis [27] (done at baseline only), and free treatment for T. vaginalis and other STIs from a research nurse. Testing and treatment for T. vaginalis was provided because the prevalence of other curable STIs was low in the study areas [22]. Antibodies reactive with T. vaginalis were detected in DBS eluates following a previously described procedure [27,28]. Written informed consent was sought as a condition of enrolment and continuation in the study. Prior ethical approval was obtained from the Research Council of Zimbabwe, number 02187; the Applied and Qualitative Research Ethics Committee in Oxford, United Kingdom, N97.039; and the UNAIDS Research Ethics Committee, ERC 98/03.\n\nBODY.RESULTS.PARTICIPANT FLOW:\nIn round 1 of the census (July 1998 to February 2000), 5,943 and 6,037 eligible individuals in the intervention (total population size 18,104) and control (18,633) communities, respectively, were selected for recruitment into the study cohort (Figure 2). In round 2, 3 y later (July 2001 to February 2003), 1,044 (23%) and 1,144 (26%) of baseline respondents who were still alive had migrated away from the intervention and control communities, respectively, and were therefore lost to follow-up (Figure 2). At both baseline and follow-up, migrants and non-migrants had similar risks of HIV-1 infection and associated behaviour [29]. Of those still resident in the intervention and control communities, 2,664 (75%) and 2,564 (77%), respectively, were interviewed and blood samples taken for a second time. Temporary absence from the usual place of residence was the main reason for non-participation in the intervention (n = 794, 95%) and control (n = 698, 94%) communities. The overall proportions of baseline respondents followed up at the end of the study were 55% and 56% in the intervention and control communities, respectively. The median follow-up of communities was 3.0 y (range of median within communities, 3.0–3.1). Figure 2Flow-Chart Comparing Participation and Follow-Up Rates in the Intervention and Control CommunitiesIndividuals enrolled in round 1 and still resident in the study communities were considered eligible for participation in round 2.\n\nBODY.RESULTS.BASELINE DATA:\nHIV-1 prevalence was higher in the intervention communities than in the control communities (24% versus 21%, risk ratio 1.13 [95% CI 1.05–1.22], p = 0.001). T. 
vaginalis infection, secondary school education, and spatial mobility were more common in the control communities, whilst history of genital discharge and uptake of STI treatment and VCT services offered in the survey were low overall but more frequent in the intervention communities (Table 1). However, the differences in each case were small and were unlikely to be clinically meaningful. Table 1 Baseline Characteristics of the Study Populations \n\nBODY.RESULTS.OUTCOMES AND ESTIMATION:\nMedian follow-up per person was 2.9 y (range 1.4–3.9) and 3.0 y (range 1.5–4.1) in the intervention and control communities, respectively. In total, 4,052 individuals had 212 incident events of HIV-1 during 12,009 person-years at risk, giving an HIV-1 incidence rate of 1.77 per 100 person-years at risk. HIV-1 incidence was higher in communities with higher baseline HIV prevalence (IRR 11.49 [95% CI 1.80–73.40], p = 0.010), but this difference disappeared after adjustment for stratification by community type (p = 0.8). HIV-1 incidence was higher in the intervention communities than in the control communities overall, and in each community type, except in the forestry plantations where it was almost identical (Table 2). The difference was not significant after adjustment for sex, age group, and baseline HIV prevalence (IRR 1.27 [95% CI 0.92–1.75], p = 0.012). The observed coefficient of between-community variation was 0.14. Table 2 HIV Prevalence at Baseline and HIV Incidence and IRRs for Intervention Versus Control Communities Looking at outcome indicators for community members (rather than for communities—the unit of randomisation), self-reported STI symptoms were similar in both sets of communities (Table 3). Treatment for STI symptoms in males was effective more frequently in the intervention communities, with men in the intervention community in five of the six matched pairs reporting reduced symptom recurrence. However, more young women in the intervention than in the control communities had started sex, and reports of unprotected sex with a casual partner in the study period were more common in the intervention communities. No differences were observed in consistent condom use with regular partners between the two sets of communities. In the intervention communities, knowledge about HIV/AIDS was enhanced amongst men, and more respondents reported a close relative or family member with AIDS (sex- and age-adjusted prevalence odds ratio 1.22 [95% CI 1.05–1.42], p = 0.009). Slightly more women in the intervention communities reported that condom use within marriage was becoming acceptable, but a greater proportion of men agreed with the statement that \"condoms reduce the pleasure of sex\". Table 3 Biomedical, Sexual Behaviour, and Psychological Outcomes at Follow-up by Residence in the Intervention and Control Communities A total of 63,261 peer-education meetings were held, and 6.8 million condoms were distributed by the programme in the intervention communities (Table 4). Outputs increased over time as new communities entered the programme. However, owing to high inflation and economic decline, the micro-credit income-generating projects proved impossible to implement. We were able to obtain data on STI episodes treated at clinics in the 11 out of 12 study communities that reported cases to the administrative districts of Mutasa and Makoni. In the three intervention communities each in Mutasa and Makoni, STI cases fell by 66% and 51%, respectively, over the 3-y study period. 
Similar declines of 67% and 52% occurred at clinics in the four control communities in Mutasa and the one control community in Makoni. Coverage of training in syndromic STI management and systemic counselling for nursing staff was high (Table 4). Table 4 Summary of Service Statistics on Programme Output Most of the activities were targeted at high-risk groups. In the general population sample interviewed in the follow-up survey, 1,779 (35%) and 647 (13%) of 5,098 respondents reported attending an HIV/AIDS meeting and a programme meeting, respectively (Table 5). More respondents in the intervention communities than in the control communities attended an HIV/AIDS meeting (41% versus 28%, prevalence rate ratio 1.44 [95% CI 1.33–1.56], p < 0.001) and a programme meeting (20% versus 5%, 4.27 [95% CI 3.52–5.17], p < 0.001), and participation was higher among men than women (prevalence rate ratio 1.32 [95% CI 1.14–1.53], p = 0.002). Fewer women in the intervention communities had heard about HIV/AIDS from external sources or believed that STI drugs were available at their local clinics. Sixty-two (2%) out of 2,528 respondents in the control communities reported spending at least 1 d in the past month in the intervention communities; the equivalent number for respondents in the intervention communities visiting control communities was 70 (3%) out of 2,683. Table 5 Intervention Coverage \n\nBODY.RESULTS.ANCILLARY ANALYSES:\nIn exploratory analysis to assess where the intervention failed, we found that HIV-1 incidence was reduced in males (IRR 0.48 [95% CI 0.24–0.98], p = 0.044) who reported attending programme meetings, after adjustment for the targeting of activities to groups with high-risk behaviour (Table 6). Amongst men who reported one or more casual sexual partners in the past 3 y, fewer of those who attended meetings reported unprotected sex with these partners (prevalence odds ratio 0.45 [95% CI 0.27–0.75], p = 0.002). HIV-1 incidence was not associated with programme participation in women. Table 6 HIV Incidence and IRRs by Meeting Attendance and Sex \n\nBODY.DISCUSSION.INTERPRETATION:\nWe conducted a scientific trial of the feasibility and impact of an integrated community- and clinic-based HIV-1–prevention intervention. The income-generating projects apart, the intervention activities were feasible. The outputs of the programme were extensive with more than 63,000 meetings being conducted and almost 7 million condoms distributed by trained peer educators. Programme messages were considered relevant and realistic. Local STI treatment and counselling services were strengthened and promoted in accordance with the intervention protocol. For male participants, these activities improved HIV/AIDS knowledge, increased the effectiveness of STI treatment, increased consistent condom use with casual partners, and reduced HIV-1 incidence. However, the cluster-RCT results clearly show that the intervention had no positive impact at the community level and suggest possible detrimental effects on the onset of female sexual activity and condom use with casual partners over a 3-y timeframe. Did the cluster-RCT design fail to capture the true effect of the intervention? There are three possibilities: (1) inadequate statistical power; (2) insufficient follow-up; and (3) contamination of intervention within control communities. 
The study design provided adequate statistical power to detect a meaningful average reduction (40%) in HIV-1 incidence in the intervention versus the control communities over a 3-y observation period. In hindsight, an effect size of 40% was too optimistic and the study had insufficient power to detect a smaller effect. However, there was no trend in the results towards reduced HIV-1 incidence in the intervention communities. Largely due to migration, attrition was close to that anticipated in the study design and was comparable to other recent cohort studies [6,10,9,30]. Migrants had similar characteristics and sexual behaviour to non-migrants [29]. The results of the exploratory sub-group analysis generate the hypothesis that high-risk behaviour was reduced in males attending programme meetings but did not translate into a wider impact on HIV-1 incidence at the population level. Changes in core and bridge populations may take more time to reflect in the general population than was observed in the trial. However, a longer period of follow-up would have increased attrition, and the finding of a possible adverse effect at the population level meant that it would not have been ethical to continue with the same intervention. Future trials of behaviour-change interventions may need to include multiple rounds with phased recruitment and (where interim results are favourable) may need to consider phased intervention implementation. We minimised intervention contamination by selecting physically separated study communities, and movements between intervention and control communities were rare. However, a similar peer-education programme was implemented in one control community (small town), and HIV-1–prevention activity was considerable in all control communities that also had greater access to information from external sources. In some cases, programme messages (e.g., promotion of condom use) conflicted with those of other agencies working in the intervention communities. The effects of these other programmes could have limited our ability to detect a reduction in HIV-1 incidence caused by the current intervention. The absence of an observed effect of the intervention was not explained by differences in HIV-1 prevalence, sexual behaviour, STI cofactors, mobility, or socio-demographic composition at baseline. The earlier sexual debut in females and greater unprotected sex with casual partners seen in the intervention communities during the study period were not present at baseline but could reflect increased willingness to report high-risk behaviours in settings where there was more open discourse about HIV and AIDS. The peer-education programme could have had some effect for male but not for unmarried female participants. Preliminary findings from subsequent qualitative investigations indicate that, in the predominantly rural communities in which the study was conducted, poverty and the associated failure of income-generating projects meant that some peer educators were unable to maintain safer behaviours. Given their increased visibility within the community—intended to enhance their status and self-esteem and, thus, to reinforce their commitment to and role as models for behaviour change—they may, inadvertently, have served as negative role models and, thereby, may have contributed to the greater female early-age sexual activity. 
Free distribution of condoms by women still engaging in unprotected commercial sex led to their being poorly valued and reinforced their association with promiscuity.\n\nBODY.DISCUSSION.GENERALISABILITY OF FINDINGS:\nEpidemiological context can affect the impact of interventions [31], and structural obstacles can limit the pace and extent to which activities are implemented and the quality of these activities [32]. The HIV-1 epidemic stabilised in eastern Zimbabwe during the study period, with HIV-1 prevalence declining by 40%–50% in young adults [23]. This decline was accompanied by delayed sexual debut, reduced sexual partner change, and consistent condom use with casual partners [33,23]. Prevalence of syphilis, gonorrhoea, and Chlamydia is low, but non-curable herpes simplex virus type 2 remains common [22]. Risk reduction makes transmission more fragile, and an intervention could have a larger effect when set against secular behavioural changes [2]. Mathematical model simulations suggest that there would also be a greater chance of detecting a significant effect of the intervention even though there would be fewer seroconversions to power the calculation [34,35]. Structural obstacles to intervention implementation included HIV/AIDS mortality which disrupted the programme by claiming the lives of two programme coordinators and several of the nursing staff and peer-educators. Economic decline made the income-generating projects unfeasible and reduced the effectiveness of other components of the intervention. We believe that the coverage of the peer-education programme was satisfactory, given the focus on highly sexually active individuals. Meeting coverage could have been under-estimated in the survey since one-to-one discussions and activities at beer halls and other public places may not have been recognised as meetings by those present. However, the high level of spatial mobility limited the number of people who were reached at the required level of intensity and consistency, whilst national shortages of foreign currency restricted fuel and drug supplies, hampered attempts to extend community activities into the more remote rural areas, and disrupted the STI treatment programme in both the intervention and control communities. The intervention that we evaluated could have greater effect where an HIV-1 epidemic is younger, HIV-1 incidence is greater, local sexual networks are less diffuse, background STI control is weak, herpes simplex virus type 2 is less common, population mobility is lower, and/or the socio-economic climate is stable. We cannot rule out an effect of peer education in the urban intervention community since similar activities were implemented in the control community. Targeted peer education may work better in towns where bar-based sex work is more extensive. The absence of reduced HIV-1 incidence in farming estates reinforces doubts raised by the Harare factory workers study [8] concerning the efficacy of workplace peer education.\n\nBODY.DISCUSSION.OVERALL EVIDENCE:\nThese findings are important since the strategies evaluated—i.e., peer education, condom distribution, and syndromic STI management—are theory-based, have the potential for independent effects [11], and are widely promoted [36,37]. Syndromic STI management was effective in a nascent epidemic [6]. However, our disappointing findings echo those from recent trials [9,12] and emphasise the need for alternative strategies of behaviour-change promotion. 
Social marketing of condoms [14], larger poverty-alleviation programmes to reduce women's reliance on sex work—based on skills training and careful market research rather than on small-scale income-generating projects—and strategies which reach beyond high-activity core groups, such as the Popular Opinion Leader programme [38,39], and client-centred counselling [40], could be more viable and effective in reducing HIV-1 transmission in rural areas. Given the necessary economic conditions, unmarried women may still play a useful role in bar-based programmes since beer halls remain foci for high-risk behaviour [41,15].\n\nBODY.SUPPORTING INFORMATION.TRIAL REGISTRATION:\nThis trial has the registration number ISRNCT00390949 in the International Standard Randomized Controlled Trial Number Register. Found at: http://www.clinicaltrials.gov/ct/show/NCT00390949?order=1\n Protocol S1Protocol(35 KB DOC)Click here for additional data file. Protocol S2Revisions to Protocol(35 KB DOC)Click here for additional data file. Text S1CONSORT Checklist(48 KB DOC)Click here for additional data file. Text S2Ethical Approval, Information Letter, and Consent Forms(2.8 MB PDF)Click here for additional data file.\n\n**Question:** Compared to Baseline communities that received standard Government services what was the result of Baseline communities that received additional education and preventive measures on Secondary school education?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** no significant difference\n"
}
] |
no significant difference
|
393
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Incisional hernia after upper abdominal surgery: a randomised controlled trial of midline versus transverse incision\n\n ABSTRACT.OBJECTIVES:\nTo determine whether a transverse incision is an alternative to a midline incision in terms of incisional hernia incidence, surgical site infection, postoperative pain, hospital stay and cosmetics in cholecystectomy.\n\nABSTRACT.SUMMARY BACKGROUND DATA:\nIncisional hernias after midline incision are commonly underestimated but probably complicate between 2 and 20% of all abdominal wall closures. The midline incision is the preferred incision for surgery of the upper abdomen despite evidence that alternatives, such as the lateral paramedian and transverse incision, exist and might reduce the rate of incisional hernia. A RCT was preformed in the pre-laparoscopic cholecystectomy era the data of which were never published.\n\nABSTRACT.METHODS:\nOne hundred and fifty female patients were randomly allocated to cholecystectomy through midline or transverse incision. Early complications, the duration to discharge and the in-hospital use of analgesics was noted. Patients returned to the surgical outpatient clinic for evaluation of the cosmetic results of the scar and to evaluate possible complications such as fistula, wound dehiscence and incisional hernia after a minimum of 12 months follow-up.\n\nABSTRACT.RESULTS:\nTwo percent (1/60) of patients that had undergone the procedure through a transverse incision presented with an incisional hernia as opposed to 14% (9/63) of patients from the midline incision group (P = 0.017). Transverse incisions were found to be significantly shorter than midline incisions and associated with more pleasing appearance. More patients having undergone a midline incision, reported pain on day one, two and three postoperatively than patients from the transverse group. The use of analgesics did not differ between the two groups.\n\nABSTRACT.CONCLUSIONS:\nIn light of our results a transverse incision should, if possible, be considered as the preferred incision in acute and elective surgery of the upper abdomen when laparoscopic surgery is not an option.\n\nBODY.INTRODUCTION:\nThe rate of incisional hernia after midline incision is commonly underestimated but probably lies between 2 and 20% [1–5]. Thus, incisional hernia is a major postoperative problem. The treatment of incisional hernia is complicated by high rates of recurrences. Recently, in a randomised controlled trial published by Burger et al. [6], midline incisional hernia repair has been shown to be associated with a 10-year cumulative recurrence rate of 63 and 32% for suture and mesh repair, respectively. The midline incision is the preferred incision for surgery of the upper abdomen, despite evidence that alternatives, such as the lateral paramedian and transverse incision, exist and might reduce the rate of incisional hernia [7]. 
Various approaches to opening the abdomen have been advocated over time. The choice for a certain incision is dependent on the exposure necessary for the desired procedure to succeed. A midline incision, be it supraumbilical, infraumbilical or both, is an approach especially suited for emergency and exploratory surgery because of the quick and generous exposure that can be achieved within a few minutes [8, 9]. The avascular nature of the linea alba minimises blood loss during this procedure. A supraumbilical transverse incision may be utilised in case exposure of the upper abdomen is desired. During this incision, the damage inflicted to the segmental arteries and nerves is previously described as being minimal [10]. Previously, only one randomised controlled trial, comparing transverse and true midline incisions, has been published specifically addressing incisional hernia incidence [11]. To determine whether the use of a transverse incision is an alternative to a midline incision for open cholecystectomy in terms of incisional hernia incidence, surgical site infection, postoperative pain and hospital stay, this randomised controlled trial was performed. This trial was conducted in an era when laparoscopic cholecystectomy was not yet available. The possibility of low incisional hernia rates after transverse incisions and the fact that little is known about potential advantages incited us to publish the relevant results of this randomised controlled trial which has been performed in the past and has only been reported in a Dutch thesis by one of the authors (H.L.). The primary endpoint of this study was the incisional hernia incidence after 12 months of follow-up. Secondary endpoints included pain and cosmetic appearance.\n\nBODY.METHODS.PROTOCOL:\nSome 150 consecutive female patients were randomly assigned to a midline or transverse incision as an approach for elective cholecystectomy or combined cholecystectomy and cholangiography (with or without consecutive choledochotomy) (75 and 75 patients, respectively). Emergency procedures were excluded from participation. The sample size is based on an incisional hernia rate reduction from 20 to 6% at a power of 80% and an error rate of 5%. Obtaining informed consent was conducted in accordance with the ethical standards of the Helsinki Declaration of 1975. The investigation reported was performed with informed consent from all of the patients and followed the guidelines for experimental investigation with human subjects and was approved by the medical ethics committee. An independent statistician prepared closed, tamper-proof envelopes containing the random allocation (Fig. 1). Patients were randomised for one of the procedures in theatre through the opening of the envelopes.Fig. 1Flow chart of patient inclusion and follow-up Patient-related factors that were recorded were age, body mass and length and date of operation. Operation-related factors that were recorded were the exact nature of the operation, length of the incision, the thickness of the subcutaneous fat, surgeon performing the procedure, as well as the duration of the operation (skin-to-skin time). In the immediate postoperative period, the use, dose and type of analgesics was recorded and a pain score was administered. The use of analgesics (morphine 7.5 mg intra-muscular injection, 4 h minimum interval between consecutive injections) was monitored for 48 h after surgery; the pain score was administered for the first 6 days after surgery. 
In patients assigned to surgery through a midline incision, the skin was incised from just below the xyphoid process to just above the umbilicus. The abdominal wall was opened in the midline by incising the linea alba. A Collin type (two-bladed) self-retaining retractor was used to maintain exposure. The abdominal wall was closed in one layer using single polygalactin 910 sutures (Vicryl; Ethicon, Amersfoort, The Netherlands). The skin was consequently closed using running monofilament nylon sutures (Ethilon; Ethicon, Amersfoort, The Netherlands). Patients randomised for a transverse incision received a right-sided unilateral transverse incision between 3 and 4 cm below the costal margin. The rectus muscle was incised. The fibres of the external and internal obliques and the transverse muscles were separated in the direction of their course. Exposure was achieved through the use of a manually held single-bladed retractor. Closure of the abdominal wall was achieved by closure of the peritoneum and the posterior rectus fascia using a continuous, polygalactin 910 suture (Vicryl; Ethicon, Amersfoort, The Netherlands). The anterior rectus sheath and the fascia of the internal and external transverses were closed using simple interrupted polygalactin 910 sutures (Vicryl; Ethicon, Amersfoort, The Netherlands). Towards the end of both procedures, a Redon low-vacuum drain catheter was placed, which was guided outside the abdominal cavity approximately 5 cm from the incision. The skin was consequently closed using continuous monofilament nylon suture (Ethilon; Ethicon, Amersfoort, The Netherlands). All patients received a dose of 5,000 IU of sodium–heparin on the morning of the procedure as thrombosis prophylaxis.\n\nBODY.METHODS.STATISTICAL ANALYSIS:\nThe Pearson χ2 test was used for comparing percentages. In case of small expected numbers, a Fisher's exact test was performed. Continuous variables were analysed using the Mann–Whitney test. A P-value of 0.05 or less (two-sided) was considered to be statistically significant. Means and medians are expressed ±standard deviation (SD).\n\nBODY.METHODS.FOLLOW-UP:\nPatients returned to the surgical outpatient clinic for evaluation of the cosmetic results of the scar and to evaluate possible complications, such as fistula, wound dehiscence and incisional hernia, after a minimum of 12 months follow-up. The patient and the surgeon evaluated the cosmetic results independently and were asked to rate the scar as unsatisfactory, satisfactory or fine. Furthermore, the length and width of the scar was measured.\n\nBODY.RESULTS.STUDY GROUP:\nSome 150 consecutive patients were randomised for participation in this study during an inclusion period from April 1977 until July 1979. Seventy-five patients received a transverse incision and 75 patients a midline incision (Fig. 1). One patient was withdrawn from further follow-up after developing peritonitis and consequent acute respiratory distress syndrome (ARDS) not related to the closure of the abdominal wall 2 days after surgery (transverse incision group). The patients' average age was 51.9 and 51.4 years for the midline and the transverse incision groups, respectively. Furthermore, no differences were found in the body mass and average length between the two groups (Table 1). A cholecystectomy was performed using a transverse incision in 52 patients and utilising a midline incision in 52 patients also. Fifteen and 16 patients, respectively, underwent a combined cholangiography/cholecystectomy. 
A further 7 and 6 patients, respectively, were treated with a cholangiography/cholecystectomy plus additional choledochotomy and the postexploratory placement of a T-tube.\n\nTable 1. Baseline characteristics of the patients undergoing surgery, according to study group\n\n| Variable | Midline incision (n = 75) | Transverse incision (n = 74) |\n| --- | --- | --- |\n| Average age (years) ± SD | 51.9 ± 14.8 | 51.4 ± 13.8 |\n| Average weight (kg) ± SD | 71.3 ± 14.5 | 68 ± 14.3 |\n| Average length (cm) ± SD | 163.5 ± 7.8 | 164 ± 7.3 |\n\nBODY.RESULTS.SURGEON:\nStaff surgeons performed 17% (13/75 patients) of all procedures performed through a midline incision. The remainder of the procedures through a midline incision was carried out under staff surgeon supervision. Staff surgeons performed 14% of all procedures in the transverse incisions study group (10/74 patients) and supervised the remainder. No statistically significant difference was found between the two randomised groups (P = 0.65).\n\nBODY.RESULTS.DURATION OF SURGERY:\nNo significant difference was noted in the skin-to-skin time (in min) for the two different incisions (Table 2). Surgery utilising midline and transverse incision took 56.9 ± 29.3 and 53.2 ± 26.8 min, respectively (P = 0.35). The total duration of the procedures until extubation (in min) did not differ between the midline and transverse incisions (71.0 ± 30.5 and 67.0 ± 27.3, respectively, P = 0.34).\n\nTable 2. Length of incision, thickness of subcutaneous fat and skin-to-skin time, according to study group\n\n| Variable | Midline incision | Transverse incision | P-value |\n| --- | --- | --- | --- |\n| Length of incision (mm) ± SD (a) | 164 ± 28 | 140 ± 24 | <0.0001 |\n| Thickness of subcutaneous fat (mm) ± SD (a) | 34.5 ± 13.0 | 30.3 ± 12.4 | 0.05 |\n| Skin-to-skin time (min) ± SD (a) | 56.9 ± 29.3 | 53.2 ± 26.8 | 0.40 |\n| Width of scar (mm) ± SD (b) | 8.3 ± 1.4 | 3.3 ± 1.2 | <0.0001 |\n\n(a) Measured during surgery in 75 midline and 74 transverse incisions. (b) Measured at follow-up in 63 and 60 midline and transverse incisions, respectively.\n\nBODY.RESULTS.PAIN AND ANALGESICS:\nSignificantly more patients, having undergone a midline incision, reported pain on day one, two and three postoperatively (P < 0.0001, Table 3). In the midline incision group, 28/75 patients required no or only one dose of analgesics; the remainder required two or more doses. Thirty-one patients operated through a transverse incision required no analgesics or only one dose; 43 patients (the remainder) required two or more. No significant difference in the use of analgesics was found between the groups (P = 0.69).\n\nTable 3. Postoperatively reported pain, according to study group, shown as the number of patients reporting pain at the time points indicated (percentage), with the remainder of patients reporting no pain\n\n| Time point after surgery | Midline incision (n = 75), patients reporting pain, n (%) | Transverse incision (n = 74), patients reporting pain, n (%) | P-value |\n| --- | --- | --- | --- |\n| 3–4 h | 68 (91) | 60 (81) | 0.09 |\n| First day | 64 (85) | 39 (53) | <0.0001 |\n| Second day | 57 (76) | 23 (31) | <0.0001 |\n| Third day | 28 (37) | 9 (12) | <0.0001 |\n| Fourth day | 5 (7) | 3 (4) | 0.72 |\n| Fifth day | 0 (0) | 1 (1) | 0.50 |\n| Sixth day | 0 (0) | 1 (1) | 0.50 |\n\nBODY.RESULTS.COMPLICATIONS:\nPostoperative complications (Table 4) were seen in 16 out of 75 patients (21%) from the midline incision group and in 15% from the transverse incision group (11 patients) (P = 0.30). Briefly, one patient in each group developed cardiac complications; 8 and 6 patients developed urinary retention after the midline and transverse incisions, respectively (P = 0.59). 
Surgical site infections were diagnosed in 7 and 3 patients, respectively (P = 0.33).\n\nTable 4. Rate of complications after surgery, according to study group, shown as the number of patients diagnosed with complications (percentage)\n\n| Complication | Midline incision (n = 75), n (%) | Transverse incision (n = 75), n (%) | P-value |\n| --- | --- | --- | --- |\n| Cardiac | 1 (1) | 1 (1) | 1 |\n| Urinary retention | 8 (12) | 6 (8) | 0.59 |\n| ARDS | 0 | 1 (1) | 0.50 |\n| Surgical site infection | 7 (9) | 3 (4) | 0.33 |\n| Haemorrhage | 1 (1) | 0 | 0.50 |\n| Pneumonia | 0 | 1 (1) | 0.50 |\n| Total | 17 (23) | 12 (16) | 0.30 |\n\nBODY.RESULTS.DISCHARGE:\nForty-five (60%) and 42 (57%) patients from the patients having undergone a midline or a transverse incision, respectively, were discharged on day 6 or 7 postoperatively. The remaining patients from each group left hospital care on day 8 or later. The duration of hospital admission did not differ between the two types of incision (P = 0.74).\n\nBODY.RESULTS.COSMETICS:\nThe width and length of all incisions was measured during the follow-up visit (Table 2). The mean width of the scar after the healing of the midline incisions was found to be 8.3 ± 1.4 mm. The mean width of the scar after the healing of the transverse incisions was measured to be 3.3 ± 1.2 mm. This observed difference is significant (P < 0.0001). The length of the incisions was 140 ± 24 mm and 164 ± 28 mm for the transverse and the midline incisions, respectively. The difference in scar length was found to be significant (P < 0.0001).\n\nBODY.FOLLOW-UP:\nEighty-one percent of all patients operated through a transverse incision were seen during the follow-up examination (n = 60). Of the patients operated through a midline incision, 63 out of 75 were seen at the outpatient clinic (84%). The patients that were lost to follow-up could either not be traced or had deceased (Fig. 1). The minimum follow-up for the evaluation of cosmetic results and hernia incidence was 12 months and the maximum was 36 months.\n\nBODY.FOLLOW-UP.INCISIONAL HERNIA:\nFrom the patients that had undergone the procedure through a transverse incision, one (1/60; 2%) presented with an incisional hernia as opposed to 9 patients from the midline incision group (9/63; 14%); 95% confidence interval (CI) 7.5–25.4%. This difference in hernia incidence is significant (P = 0.017). No significant correlation was found between the incisional hernia rate and surgical site infection (P = 0.07).\n\nBODY.FOLLOW-UP.SUBJECTIVE COSMETICS:\nPatients and surgeons alike were asked to rate the appearance of the scar during the postoperative follow-up outpatient clinic visit. Both the surgeons and the patients found the scar resulting from the transverse incision to be more cosmetically pleasing (P < 0.0001 and P = 0.03, respectively, Table 5).\n\nTable 5. Number of patients and surgeons rating the cosmetics of a scar at follow-up\n\n| Score | Midline incision (n = 63), patients, n (%) | Midline incision (n = 63), surgeons, n (%) | Transverse incision (n = 60), patients, n (%) | Transverse incision (n = 60), surgeons, n (%) |\n| --- | --- | --- | --- | --- |\n| Unsatisfactory | 6 (10) | 25 (40) | 2 (3) | 6 (10) |\n| Satisfactory | 16 (25) | 27 (43) | 9 (15) | 12 (20) |\n| Fine | 41 (65) | 11 (17) | 49 (82) | 42 (70) |\n| Total | 63 | 63 | 60 | 60 |\n\nDifference between type of incision: patients P = 0.03; surgeons P < 0.0001\n\nBODY.DISCUSSION:\nThis prospective randomised study of transverse and midline incisions for open cholecystectomy shows that a significant reduction of incisional hernia incidence can be achieved through the use of a transverse incision. Only one other study (published in 1980) reported the incidence of incisional hernia after upper abdominal midline and unilateral transverse incision in a randomised trial. 
No difference between the two techniques (8 and 6% incisional hernia, respectively) was found, but the relatively short follow-up of 6 months, however, may be held accountable for this finding [11]. Three retrospective studies showed rates of incisional hernia of 3.2, 5.4 and 16.5% for midline incision and 1.3, 6.7 and 13.4% for transverse incision without statistically significant differences [12–14]. The possible reason for the rather high incidence of incisional hernia in the midline incision group (14%) may lie in the use of resorbable 910 polygalactin sutures. Nevertheless, the use of the same type of resorbable suture in the closure of the transverse incisions resulted in a 2% hernia rate. There is evidence for the importance of proper technique and choice of incision as a means to reduce incisional hernia being more important than the use of suture material [7]. Furthermore, as mentioned above, it is known that the incidence of incisional hernia in the case of a midline incision lies between 2 and 20%. From our data, the NNT (numbers needed to treat) is calculated to be 8 (95% CI 5–30) and the RRR (relative risk reduction) is 88% (95% CI 23–100%). Luijendijk et al. [15] have published a hernia rate of 2% after Pfannenstiel incisions closed using 910 polygalactin, which is in agreement with our findings in the patients randomised for a transverse incision, emphasising the importance of the incision over the choice of suture material. In our study, significantly fewer patients reported pain on day 1, 2 and 3 after transverse incisions, a result that was also described by other authors [16, 17]. Greenall et al. [18] published a contradictory report (in 1980) in which no significant difference in postoperative pain was found between midline and transverse incisions. The previously mentioned study, however, only analysed 46 out of 572 patients (8%) with regard to pain, which may explain the finding. In the same way, Lacy et al. suspended visual analogue pain scoring in a study comparing midline and transverse incisions for abdominal aortic surgery. Remarkably, the two groups in our study did not differ in terms of postoperative analgesia, a finding that is also reported by Lindgren et al. [17] and Donati et al. [19]. In our study, surgeons as well as patients were significantly more satisfied with the aesthetic appearance after a transverse in comparison with a midline incision. The scars after transverse incisions were found to be significantly shorter and less wide than the midline incisions, which may account for the observed difference. A possible reason for this is that a transverse incision is executed parallel to the prevailing direction of the skin lines on the abdomen and, therefore, the tension on the wound and consequent scar is low. Cholecystectomy has come a long way since this trial. The introduction and widespread acceptance of laparoscopic technique as the treatment of choice has rendered open cholecystectomy to be an operation for exceptional, and perhaps surgically difficult, circumstances. Nowadays, the study reported is hardly feasible, yet, the results are still applicable and very relevant for other surgical procedures in the (upper) abdomen. Knowledge of the favourable results of a transverse incision may aid surgeons in their choice when finding themselves in the unfortunate position of needing conversion to open cholecystectomy. 
In conclusion, this investigation on transverse incisions might be helpful in reducing the incidence of incisional hernia in patients after open cholecystectomy. The midline incision is a preferred manner to achieve exposure of the abdominal cavity and is considered to be easily performed and quick. Although the midline incision is generally accepted, the incidence of incisional hernias is surprisingly high [1–5]. The choice for a particular incision should not only be based on exposure, but also on hernia incidence reduction, especially since recurrence rates after hernia repair are reported to be very high. Furthermore, the recurrence rate after incisional hernia repair is a disappointing 63 and 32% for suture and mesh repair, respectively [6]. In the light of these results, incisional hernia prevention is warranted. In this investigation, it is shown that a significant reduction (from 14.5 to 1.7%) of incisional hernia incidence was achieved by using a transverse incision. Hence, a transverse incision should be considered as the preferred incision in acute and elective surgery of the upper abdomen in which laparoscopic surgery is not an option. Full exposure of two quadrants is feasible through the use of a unilateral transverse incision in, for example, biliary, bariatric, liver and colonic surgery. The transverse incision should be part of the abdominal surgeon's armamentarium and is a preferable incision to prevent the high incidence of incisional hernia after abdominal surgery.\n\n**Question:** Compared to Midline incision what was the result of Transverse incision on Pain on day three after surgery?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly decreased\n"
}
] |
significantly decreased
|
473
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Randomized controlled trial of clopidogrel to prevent primary arteriovenous fistula failure in hemodialysis patients\n\n ABSTRACT:\nThe optimal vascular access for chronic maintenance hemodialysis is the arteriovenous fistula (AVF). Several studies suggest a role for antiplatelet agents in the prevention of primary AVF failure. A double-blind, randomized trial was conducted to assess the efficacy and safety of clopidogrel in hemodialysis patients. Ninety three patients were randomized to receive 75 mg/daily of clopidogrel or placebo. The treatment was initiated 7–10 days prior to scheduled access surgery and continued up to six weeks postoperatively, and then patients were monitored for six months. The primary outcome was AVF failure eight weeks after fistula creation. With a permuted block randomization schedule, 46 patients received clopidogrel and 47 patients received control placebo. The primary AVF failures at two months were 21.6% in placebo group and 5.2% in clopidogrel group (P = 0.03). The hazard ratio for the incidence of primary AVF failure was 0.72 (CI 95%, 0.41–1.01). Analysis of covariables indicated that this effect occurred principally as a result of clopidogrel administration. First hemodialysis from newly created AVF in clopidogrel group was significantly more successful than placebo group (P = 0.008). No life-threatening adverse event or severe bleeding was recorded in both groups. Clopidogrel seems to be effective and safe for prevention of primary AVF failure in hemodialysis patients.\n\nBODY.INTRODUCTION:\nPerformance of a successful hemodialysis procedure requires a functional vascular access. The preferred type of access is a native fistula because they have the lowest risk of complications, lowest need for intervention, and the best long-term patency.[1] Once an arteriovenous fistula (AVF) is created, it must develop to the point that it is usable. Vascular access dysfunction is one of the most important causes of morbidity in the hemodialysis population.[2] Primary failure of native fistulas occurs as a result of either thrombosis within the first several weeks following surgical creation (early thrombosis), or inadequate maturation of the vein.[3] The primary AVF failure rate is approximately 9–50%.[4–6] Fistula evaluation 4–6 weeks after creation should be considered mandatory.[7] The clinical manifestations of early fistula failure are failure to develop adequately to permit repetitive cannulation for dialysis, inadequate flow to support dialysis, and thrombosis. The characteristic pathology that results in AVF failure is a juxta-anastomotic stenosis.[8] Whether primary AVF failure can be prevented with pharmacologic agents has not been extensively examined. 
Several studies have indicated that the frequency of AVF failure and loss can be reduced with antiplatelet agents.[9–18] Although those results are encouraging, they do not provide conclusive evidence of the efficacy of antiplatelet agents among patients with AVF. On the basis of these considerations, we performed a randomized, double-blind trial to test the hypothesis that clopidogrel, would prevent primary AVF failure among hemodialysis patients.\n\nBODY.MATERIALS AND METHODS:\nThe study was a randomized, double-blind trial. Patients of minimum age 18 years close to the initiation of chronic hemodialysis requiring AVF, and patients who were undergoing chronic hemodialysis but requiring a new AVF at a different site were the inclusion criteria. Exclusion criteria included patients with a history of gastrointestinal bleeding or previous bleeding episodes within six months prior to initiation of the study, patients already receiving chronic anticoagulation therapy (antiplatelet agents or warfarin), patients with terminal or life-threatening disease, pregnancy, malignant hypertension, a platelet count of <100,000/mm3, and other demonstrated medical conditions that would make antiplatelet therapy dangerous. All patients were recruited from the outpatient hemodialysis program at Jondi Shapour University, and the same surgical team placed all fistulas. Randomization was performed centrally, by the coordinating center. The randomization was stratified according to medical center with a permuted block scheme, with a block size of four and equal allocation. After identifying and obtaining consent from eligible participants, the local study coordinator telephoned the coordinating center to obtain a randomization number, which corresponded to a specific medication bottle available in the local research pharmacy. Neither the details of the randomization sequence nor the identity of the medication assignment was known to the participant or any personnel at the participating sites. Consenting eligible participants were randomized to receive either clopidogrel (75 mg/daily) or matching placebo (47 patients in placebo group and 46 patients in clopidogrel group). The treatment was initiated 7–10 days prior to scheduled access surgery and continued up to six weeks postoperatively, with full approval by the Jondi Shapour University Institutional Review Board. Patients were monitored for occurrence of complications, need for hemodialysis until six months after trial. The primary null hypothesis of the study was that clopidogrel would have no preventive effect on the incidence of primary AVF failure. The primary outcome was AVF failure eight weeks after fistula creation. Fistula failure was determined by a member of the study team (either the study coordinator or the site principal investigator), who was blinded to the treatment allocation. The fistula was classified as patent if a bruit was detectable along the vein at least 8 cm proximal to the arteriovenous anastomosis throughout systole and diastole. Secondary outcomes included adverse events and mortality. Platelet hemostatic function was measured monthly, as whole-blood bleeding time. Routine blood chemistry profiles, dialysis prescriptions, body weights, medications, and complications for all patients were recorded in a computerized database, and thus were available for inclusion in the final data analysis. We obtained detailed information on bleeding events. 
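The permuted-block allocation described above (block size of four, equal allocation, stratified by centre) can be sketched as follows; this is an illustrative reconstruction rather than the trial's actual randomisation code, and the seed and arm labels are placeholders of our own:

```python
# Illustrative sketch of a permuted-block randomisation schedule
# (block size 4, 1:1 allocation), as described in the Methods.
import random

def permuted_block_schedule(n_patients, block_size=4,
                            arms=("clopidogrel", "placebo"), seed=0):
    rng = random.Random(seed)
    per_arm = block_size // len(arms)
    schedule = []
    while len(schedule) < n_patients:
        block = list(arms) * per_arm   # e.g. 2 x clopidogrel, 2 x placebo
        rng.shuffle(block)             # permute the order within the block
        schedule.extend(block)
    return schedule[:n_patients]

# 93 consenting patients, as in the trial
print(permuted_block_schedule(93)[:8])
```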
Assessment of the severity of bleeding episodes was performed by a panel blinded to the treatment assignments. Discontinuation of the study drug following any bleeding event was the rule.\n\nBODY.MATERIALS AND METHODS.STATISTICAL ANALYSIS:\nData are presented as mean ± SE for continuous variables and as percentages for categorical variables. All data were tested for normality using the method of Kolmogorov-Smirnov. Statistical analyses were performed on an intention-to-treat basis. The t test was used when the means of two groups were compared. On the basis of intention-to-treat principles, all other participants for whom study medications were discontinued continued to be monitored according to the protocol. All hypothesis tests were conducted by using a significance level of 0.05 (two-sided). The statistical program SPSS version 13 (SPSS, Chicago, IL) was used to analyze the data.\n\nBODY.RESULTS:\nBetween December 2006 and March 2008, a total of 93 patients met the study criteria for enrollment. The demographic and baseline laboratory findings for each group are summarized in Table 1. There were no significant differences in covariables between groups. None of the covariables (covariables thought to influence the risk of AVF included age, gender, diabetes mellitus, bleeding time, and blood pressure) measured at baseline or follow-up times was correlated with the development of AVF failure. All patients were taking study medication 7–10 days prior to surgery. However, the medication had to be discontinued prematurely in 18 (19.4%) patients because of no intention to complete trial by six cases and complication occurring in 11 cases. One patient died before the end of trial. Two patients in clopidogrel group and three patients in placebo group were enrolled after creation of a second fistula. Prior AVFs failed because of late fistula failure in all of them. Finally, 75 patients completed trial (38 patients in clopidogrel and 37 patients in placebo groups). None of the patients was lost to follow-up. Table 1 Patients characteristics at baseline Characteristic All patients (%) Clopidogrel (%) Placebo (%) P value Age (years) 45.03 ± 1.2 44.23 ± 3.36 45.8 ± 2.84 0.47 Males 48 (51.6) 24 (25.8) 24 (25.8) 0.91 Diabetes mellitus 25 (26.9) 14 (15.1) 11 (11.8) 0.44 Patients on hemodialysis 63 (67.7) 29 (31.2) 34 (36.6) 0.33 Drug use by the end of trial 75 (80.6) 38 (40.9) 37 (39.8) 0.63 Five and three fistulae in clopidogrel-treated and placebo groups were proximal AVFs. Two patients in clopidogrel-treated group (5.26%) showed an early failure of the AVF compared to eight patients (21.62%) in placebo group (P < 0.05) [Table 2]. There was significant benefit of active treatment in the prevention of AVF failure. The hazard ratio for the incidence of primary AVF failure was 0.72 (CI 95%, 0.41–1.01). Fistula locations had no effects on the development of AVF failure in either group. During follow-up from eight weeks to six months no important complication occurred. Sixty patients required hemodialysis during this time. Hemodialysis was tried in 26 patients of clopidogrel group within six months after AVF creation and it was successful in 24 patients (92.3%). In placebo group, hemodialysis from new AVF was tried in 34 patients and it was successful in 24 cases (70.5%). First hemodialysis from newly created AVF in clopidogrel group was significantly more successful than placebo group (P = 0.008). The cumulative incidence of bleeding was similar between groups [Table 3]. 
No severe bleeding episode such as intracranial hemorrhage or life-threatening bleeding was recorded during active treatment period. There were no deaths attributable to bleeding in either treatment group. Bleeding times were similar at baseline for clopidogrel group (8.1 ± 0.3 min) and placebo group (8.4 ± 0.6 min) and remained stable (8.5 ± 0.4 min for clopidogrel group and 8.6 ± 0.3 min for placebo group) throughout the study period (P = 0.21). In addition, there were no differences between baseline and follow-up hematocrit values or changes in recombinant human erythropoietin doses during the study period for either group. Table 2 Clinical outcomes at the end of trial Outcome All patients (%) Clopidogrel (%) Placebo (%) P value Primary AVF failure Drug use by the end of trial 10 (26.88) 2 (5.26) 8 (21.62) 0.03 Premature cession of drug 2 (2.15) 0 (0) 2 (2.15) Table 3 Complications and mortality Complication All patients (%) Clopidogrel (%) Placebo (%) P value GI bleeding 5 (5.3) 2 (2.1) 3 (3.2) 0.31 Non GI tract bleeding 9 (9.6) 5 (5.3) 4 (4.3) 0.63 Death events 4 (4.3) 2 (2.1) 2 (2.1) 0.47 \n\nBODY.DISCUSSION:\nChronic maintenance hemodialysis requires stable and repetitive access to the intravascular compartment in order to deliver high rates of blood flow to the extracorporeal circuit. The AVF is the method of choice for the establishment of hemodialysis vascular access in patients with endstage renal disease.[1] The fistula is relatively simple to perform under local anesthesia and, when successfully established, is easy to needle and relatively free from complications. However, a significant proportion (9–50%) of fistulas fails early within three months of surgery.[4–6] An AVF with primary failure is defined as a fistula that never provided reliable hemodialysis.[18] Vascular access failure is the most common reason for hospitalization among hemodialysis patients.[19] The typical lesion of access thrombosis is neointimal vascular smooth muscle cell proliferation in the anastomotic draining vein. Platelet activation from endothelial injury may play an important role in stimulating platelet aggregators such as PDGF and thromboxane A2, in addition to directly stimulating vascular intimal proliferation.[18] Therefore, the therapeutic potential of antiplatelet agents including aspirin, sulfinpyrazone, dipyridamole, and ticlopidine were tested.[9–17] Our study was undertaken to determine the effects of clopidogrel on the incidence of primary AVF failure among newly created AVFs. We observed a significant risk reduction in the primary AVF failure in active treatment group compared to placebo group. The results of our analysis suggest that daily administration of 75 mg of clopidogrel, beginning 7–10 days prior to AVF creation, was successful in preventing the development of vascular failure with acceptable side effects. We were unable to account for the differences observed in our clinical trial on the basis of age, gender, diabetes mellitus, bleeding times, hematocrit levels, or weekly doses of recombinant human erythropoietin. This finding suggests that the risk reduction in vascular failure might be attributed to clopidogrel administration. Our results are supported by recent Cochrane report.[20] This meta-analysis confirmed the beneficial effect of antiplatelet treatment as an adjuvant to increase the patency of AVFs in the short term. However, there have been multiple studies showing variable results of antiplatelet agents on vascular access failure. 
Yevzlin et al., showed a negative association between antiplatelet therapy and access patency.[21] In this trial, usage of some drugs was not associated with significant risk reduction in access failure. Moreover, antiplatelet therapy in patients with access failure was associated with significantly increased risk of access failure. Kaufman et al., demonstrated no change in the risk of graft thrombosis with aspirin plus clopidogrel therapy.[22] They also noted that in chronic hemodialysis patients there is a trend toward increased thrombosis with aspirin therapy. Also Kooistra et al., were unable to demonstrate a benefit with low-dose aspirin on thrombovascular events in 68 hemodialysis patients.[23] In contrast, combining all the studies of antiplatelet agents in patients with new primary fistulae in which there was a placebo control group, the thrombosis rate in the control group was significantly higher than active treatment group.[20] Three trials compared ticlopidine with placebo with a total number of 312 participants undergoing AVF formation or graft interposition. All three trials comparing ticlopidine with placebo favored treatment in both AVF and vascular grafts. In the Fiskerstrand study, two out of six patients in the ticlopidine group compared with five out of nine in the placebo group, developed fistulae thromboses at one month (OR = 0.40, CI 95%, 0.05– 3.42).[15] In the earlier Grontoft study, only two out of 19 who received treatment developed fistulae thromboses compared to eight out of 17 on placebo (OR = 0.13, CI 95%, 0.02–0.76).[13] In Gontoft 1998, 16 out of 130 patients who received ticlopidine developed thromboses in the fistulae compared with 25 out of 131 in the placebo group (OR = 0.60, CI 95%, 0.30–1.18).[14] The overall result of the meta-analysis also favored treatment (OR = 0.47, CI 95%, 0.26–0.85). The overall P-value was 0.01.[20] We also assessed the effect of clopidogrel on the successful initiation of hemodialysis via AVF. The rate of the performing successful first hemodialysis via AVF was greater in clopidogrel group. The overall incidence of bleeding events was 15% in our study. According to Kaufman et al., we expected to encounter with 16 episodes of bleeding during six months in our patients.[22] However, the incidence of bleeding episodes was lower than expected. This finding might be related to restrictive exclusion criteria. This study is the first carefully monitored trial that was limited to AVF. It is a well-known fact that AVFs have lower thrombosis rates compared with AVF grafts. In multiple studies, AVFs have been shown to have significantly improved patency rates and lower complication and infection rates. Some studies pointed out that antiplatelet therapy might be more effective in fistulas than in arteriovenous grafts.[1422] We studied this hypotheses for first time. Our results suggested that clopidogrel is an effective therapy in prevention of primary AVF failure. The major limitation of our study is the small number of patients, but in view of the promising results we believe that our preliminary findings deserved prompt communication. However, the data must be interpreted with caution because the pharmacological approach to prevent vascular access thrombosis in hemodialysis is still in its infancy. Overall, the effect of antiplatelet agents on vascular access patency needs further investigation. A prospective randomized controlled trial with larger number of patients is warranted. 
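The odds ratios quoted above for the ticlopidine trials follow directly from the 2 × 2 thrombosis counts given in the text; a minimal check (our own code, not part of the cited meta-analysis, and without the confidence intervals):

```python
# Recompute the quoted odds ratios from the thrombosis counts given in the text.
def odds_ratio(events_tx, n_tx, events_ctrl, n_ctrl):
    return (events_tx * (n_ctrl - events_ctrl)) / ((n_tx - events_tx) * events_ctrl)

print(round(odds_ratio(2, 6, 5, 9), 2))        # Fiskerstrand: ~0.40
print(round(odds_ratio(2, 19, 8, 17), 2))      # earlier Grontoft study: ~0.13
print(round(odds_ratio(16, 130, 25, 131), 2))  # Grontoft 1998: ~0.60
```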
Currently the National Institutes of Health sponsored Dialysis Access Consortium is conducting an ongoing double-blind multicenter randomized evaluation of clopidogrel in AVF patency.[24]\n\nBODY.CONCLUSION:\nPrimary AVF failure remains a major problem for hemodialysis patients. Vascular access thrombosis prophylaxis needs to start early in the ESRD patient. Clopidogrel, beginning 7–10 days prior to AVF creation and continuing for six weeks, seems to prevent primary AVF failure with acceptable side effects in selected hemodialysis patients.\n\n**Question:** Compared to Placebo what was the result of Clopidogrel on First hemodialysis success?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly increased\n"
}
] |
significantly increased
|
228
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: A heart failure self-management program for patients of all literacy levels: A randomized, controlled trial [ISRCTN11535170]\n\n ABSTRACT.BACKGROUND:\nSelf-management programs for patients with heart failure can reduce hospitalizations and mortality. However, no programs have analyzed their usefulness for patients with low literacy. We compared the efficacy of a heart failure self-management program designed for patients with low literacy versus usual care.\n\nABSTRACT.METHODS:\nWe performed a 12-month randomized controlled trial. From November 2001 to April 2003, we enrolled participants aged 30–80, who had heart failure and took furosemide. Intervention patients received education on self-care emphasizing daily weight measurement, diuretic dose self-adjustment, and symptom recognition and response. Picture-based educational materials, a digital scale, and scheduled telephone follow-up were provided to reinforce adherence. Control patients received a generic heart failure brochure and usual care. Primary outcomes were combined hospitalization or death, and heart failure-related quality of life.\n\nABSTRACT.RESULTS:\n123 patients (64 control, 59 intervention) participated; 41% had inadequate literacy. Patients in the intervention group had a lower rate of hospitalization or death (crude incidence rate ratio (IRR) = 0.69; CI 0.4, 1.2; adjusted IRR = 0.53; CI 0.32, 0.89). This difference was larger for patients with low literacy (IRR = 0.39; CI 0.16, 0.91) than for higher literacy (IRR = 0.56; CI 0.3, 1.04), but the interaction was not statistically significant. At 12 months, more patients in the intervention group reported monitoring weights daily (79% vs. 29%, p < 0.0001). After adjusting for baseline demographic and treatment differences, we found no difference in heart failure-related quality of life at 12 months (difference = -2; CI -5, +9).\n\nABSTRACT.CONCLUSION:\nA primary care-based heart failure self-management program designed for patients with low literacy reduces the risk of hospitalizations or death.\n\nBODY.BACKGROUND:\nLimited literacy skills are common among adults in the United States [1]. Low literacy is associated with increased risk of hospitalization and worse control of chronic diseases [1-4]. Heart failure is a common chronic illness requiring multiple medications and significant self-care. Heart failure is the leading cause of hospitalization in the Medicare population [5]. The complexity of care for heart failure puts people with low literacy at considerable risk for adverse outcomes including hospitalization, worse quality of life, and mortality. Heart failure disease-management interventions appear effective in reducing rehospitalizations and improving quality of life [6]. 
Most randomized clinical trials of heart failure disease management completed over the last 10 years have enrolled patients during, or shortly after, hospitalization and reported the outcome of readmission [6]. Although the designs of these programs vary, several have tested education and support to enhance patient self-management as the main component of the intervention [7-10]. The content of self-management education usually includes teaching to understand medications, reduce salt intake, monitor daily weights, and recognize symptoms. Most programs include structured follow-up either by home visit, phone, or mail. Only a few, uncontrolled studies specifically ask patients to self-adjust their diuretics [11,12]. Heart failure self-management programs may be particularly effective for vulnerable populations, such as those with poor literacy [13,14]. However, to our knowledge, no previous studies have explicitly examined the role of self-management programs in a low literacy population. A recently published study and accompanying editorial suggested that such self-management support may be most effective among vulnerable populations [13,14]. Low literacy may represent a vulnerability for which we should design our programs. Disease management for patients with low literacy may require refined approaches to foster self-management skills. We developed a heart failure self-management program for use by patients with a variety of literacy levels [15]. We performed a randomized controlled trial comparing our self-management program to usual care among outpatients to test if the program could reduce hospitalizations and improve heart failure-related quality of life.\n\nBODY.METHODS.STUDY DESIGN:\nWe conducted a randomized controlled trial in the University of North Carolina (UNC) General Internal Medicine Practice, which serves a wide socioeconomic range of patients. The practice, staffed by over 20 attending faculty and 70 medical residents, cares for over 500 patients with heart failure.\n\nBODY.METHODS.STUDY PARTICIPANTS:\nTo be eligible, patients had to have a clinical diagnosis of heart failure confirmed by their primary provider through a direct interview, and one of the following: 1) chest x-ray findings consistent with heart failure, 2) ejection fraction <40% by any method, or 3) a history of peripheral edema. They also had to have New York Heart Association class II-IV symptoms within the last 3 months. Patients were excluded if they had moderate to severe dementia (based on the treating physician's clinical judgment), terminal illness with life expectancy less than 6 months, severe hearing impairment, blindness, current substance abuse, a serum creatinine >4 mg/dl or on dialysis, a requirement of supplemental oxygen at home, lacked a telephone, or were scheduled to undergo cardiac surgery or awaiting heart transplant. We did not exclude patients on the basis of literacy skill because we felt patients of all literacy levels would benefit from this intervention. Patients who read well often prefer information presented in an easy-to-read format [16]. We accepted referrals from the cardiology clinic if patients met eligibility criteria. This study was approved by the Institutional Review Board of the UNC School of Medicine, and all participants gave informed consent prior to enrollment. For participants who could not adequately read the informed consent document, the research assistant read and explained it to them. 
They were asked to sign a short form indicating that the informed consent document was reviewed and they agreed to participate. When the short form was used, a witness was asked to attest to the adequacy of the consent process.\n\nBODY.METHODS.PROCEDURES:\nParticipants were recruited between November 2001 and April 2003 from the General Internal Medicine and Cardiology Practices at UNC Hospitals. A trained research assistant screened all patients age 30–80 for use of furosemide. If the patient was on furosemide, their physician was queried about the presence of heart failure. If the patient met eligibility criteria and consented to participate, baseline data were collected. We then randomized patients by concealed allocation based on a random number generator to receive the intervention or usual care. All patients were followed for one year. All data were collected in the General Internal Medicine Practice.\n\nBODY.METHODS.INTERVENTION:\nThe intervention was delivered in the General Internal Medicine Practice. The educational materials and disease management intervention were previously described in detail, and the intervention is summarized here [15]. The intervention began with a 1-hour educational session with a clinical pharmacist or health educator during a regular clinic visit. Patients were given an educational booklet designed for low literacy patients (written below the 6th grade level and extensively pre-tested in focus groups and a pilot study [15]) and a digital scale. The educator and patient reviewed the booklet together, including management scenarios. As part of the educational session, patients were taught to identify signs of heart failure exacerbation, perform daily weight assessment, and adjust their diuretic dose. Because this intervention was aimed at patients with low literacy, the health educator used pedagogic strategies felt to improve comprehension for patients with low literacy [17]. For example, the educator had the patient teach back the information [18], engaged the patient in filling out the notebook, and used brainstorming to help the patient incorporate self-management into their lives. The educator, patient, and primary care physician collaborated to establish the patient's \"good weight\" (i.e., where the patient's heart failure was stable) and baseline diuretic dose. The educator then filled in the management plan in the patient's notebook to help the patient better manage weight fluctuations and self-adjust the diuretic dose based on weight (Figure 1). The general plan involved doubling the dosage if weight went up and halving it if weight went down. The program coordinator then made scheduled follow-up phone calls (days 3, 7, 14, 21, 28, 56) and monthly during months 3–6. The follow-up phone calls, each lasting 5–15 minutes, were designed to reinforce the educational session and provide motivation for the patients. Again, the program coordinator had the patient describe their self-management practices and offered feedback to improve them. Patients experiencing worsening symptoms were scheduled acute visits with their physician. We did not provide specialized nursing assessment, care or medication advice beyond diuretic dosing. If the patient's doctor determined that the good weight had changed, the program coordinator would revise the care plan with the patient. 
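The weight-based self-adjustment plan described above (double the diuretic dose when weight rises above the "good weight", halve it when weight falls below) can be sketched roughly as follows. This is only an illustration of the logic: the actual bands, units and doses were set individually in each patient's notebook with the primary care physician, so the three-pound band and the doses below are assumptions, not the study protocol.

```python
# Rough sketch of the weight-based diuretic self-adjustment logic described
# in the intervention. NOTE: the +/- 3 lb band is an assumed placeholder;
# the study defined each plan individually with the patient's physician.
def adjust_furosemide(baseline_dose_mg, good_weight_lb, today_weight_lb, band_lb=3):
    if today_weight_lb > good_weight_lb + band_lb:
        return baseline_dose_mg * 2      # weight up: double the dose
    if today_weight_lb < good_weight_lb - band_lb:
        return baseline_dose_mg / 2      # weight down: halve the dose
    return baseline_dose_mg              # within range: keep the baseline dose

print(adjust_furosemide(40, good_weight_lb=180, today_weight_lb=185))  # -> 80
```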
Patients enrolled in the control group received a general heart failure education pamphlet written at approximately the 7th grade level, and continued with usual care from their primary physician. The only contacts between the research team and the control patients were at enrollment and data collection.\n\nBODY.METHODS.MEASURES:\nWe assessed outcomes at 6 and 12 months through in-person interviews and review of the medical record. To be sensitive to low literacy, all interviews were conducted verbally by a trained research assistant. If patients were unable to come to clinic for the interview, it was conducted by phone. The research assistant was not blinded to the patient's study group. Primary outcomes were death or all-cause readmission and heart failure-related quality of life at the end of 12 months. Data on hospitalization dates were obtained from the patient and confirmed by medical chart review. All admissions, regardless of hospital or cause, were counted. For exploratory analyses, we classified reason for admission as cardiac or non-cardiac. Cardiac admissions included those primarily for heart failure (e.g., shortness of breath and edema relieved by diuresis) and other cardiac causes such as chest pain, arrhythmias, or syncope. Cause of admission was determined by chart review by one of the authors (D.D.) who was blinded to treatment allocation. Heart failure-related quality of life was assessed using a modified version of the Minnesota Living with Heart Failure Questionnaire (MLHF). The MLHF is a 21 question instrument with a 6-point Likert scale response option and scores ranging from 0 to 105 [19]. In pilot testing of the MLHF, we found that low literacy patients had trouble answering questions with the standard 6-point Likert scale [15], so we changed the response scale to 4 points, using a visual display with stars to represent increasing severity. The 4-point Likert scale was scored as 0, 1, 3, and 5 to approximate standard scores on the MLHF. Secondary measures included heart failure self-efficacy, knowledge, and behaviors. Self-efficacy was measured with an 8 item scale developed for the behaviors needed in this trial as suggested by self-efficacy scale developers [20]. Respondents used a 4-point Likert scale yielding a total score from 0–24. We assessed heart failure knowledge using a knowledge test previously developed for this population [15], Heart failure self-management behavior was assessed by asking patients how often they weighed themselves. We used patient self-report and the medical chart to measure age, gender, race, insurance status, income, years of education, medication use, years with heart failure, and the presence of co-morbidities. We measured literacy using the Short Test of Functional Health Literacy in Adults (S-TOFHLA) [21], a well-validated scale that correlates well with other measures of reading ability [22]. Patients who scored in the inadequate literacy range on the S-TOFHLA were considered to have \"low literacy.\" The cut-point for inadequate literacy is roughly analogous to the 4th grade reading level.\n\nBODY.METHODS.SAMPLE SIZE:\nSample size was based on pilot study results showing a 9-point improvement in MLHF scores over 3-months with the intervention [15]. Detecting a 9-point difference between intervention and control group with 80% power and alpha set at 0.05 required 70 patients per group. We aimed to enroll 150 patients to account for possible attrition, but stopped at 127 because funding ended. 
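For the sample-size statement above (70 patients per group to detect a 9-point MLHF difference with 80% power and two-sided alpha of 0.05), the standard two-sample normal-approximation formula reproduces the figure if the pilot standard deviation was roughly 19 points; the SD is not reported in this text, so the value below is an assumption used only for illustration:

```python
# Two-sample sample size per group:
# n = 2 * (z_{1-alpha/2} + z_{1-beta})^2 * sd^2 / delta^2
from scipy.stats import norm

def n_per_group(delta, sd, alpha=0.05, power=0.80):
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return 2 * (z * sd / delta) ** 2

# sd ~ 19 points is an assumed value, not taken from the article
print(round(n_per_group(delta=9, sd=19)))   # -> ~70 per group
```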
We did not power this study to detect differences in hospitalization, but studies with even smaller numbers of patients have shown a difference for that outcome [7].\n\nBODY.METHODS.STATISTICAL METHODS:\nPatients who did not return any phone calls and did not return for follow-up assessment did not have outcome data for analysis. Patients who withdrew from the study were censored at the time of withdrawal; any data collected prior to withdrawal were included in the analysis. Baseline differences between groups were assessed using t-tests for continuous outcomes and chi-squared tests for categorical outcomes. For MLHF, heart failure self-efficacy and heart failure knowledge, we used two-sample t-tests. Non-parametric tests were also performed for all comparisons, but results did not differ from the parametric tests. Because of the small sample size and the unequal distribution of baseline characteristics, we adjusted for baseline differences using linear regression. Analyses of self-reported outcomes, such as MLHF, excluded patients who died or withdrew from the study before 6 or 12 month data was collected. For hospitalization or death, we used negative binomial regression and censored patients at the time of death or withdrawal from the study. Based on the likelihood ratio test, negative binomial regression was a better fit for the data than a Poisson regression. Additionally, the Vuong test confirmed that a zero-inflated model was inappropriate [23]. Because of uneven distribution of baseline characteristics, we performed analysis of covariance with negative binomial regression to control for baseline differences [24]. We identified the following variables that could contribute to hospitalization or death based on previous studies: age, race, gender, literacy level, hypertension, diabetes, creatinine, MLHF score, use of β-blockers, angiotensin converting enzyme (ACE) inhibitors or angiotensin receptor blockers (ARBs), use of digoxin, and presence of systolic dysfunction [7,25]. Variables were not included in the model if the relationship between the variable and the study group or outcome had a p value greater than 0.3. We started with a model including the following items to arrive at the best point estimate: age, gender, hypertension, creatinine, MLHF, use of β-blockers, and use of ACE inhibitors or ARBs. We then eliminated variables from the model if p > 0.30, and if the point estimate remained within 10% of the initial estimate. We prespecified a subgroup analysis in patients with low literacy to analyze if the intervention had a similar effect. The same analysis described above was repeated for the subgroups of patients with low literacy and those with higher literacy. The initial multivariate model for the subgroups analysis included: age, gender, hypertension, MLHF, use of β-blockers, and use of ACE inhibitors or ARBs.\n\nBODY.METHODS.ROLE OF THE FUNDING SOURCE:\nThe funding sources had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; or preparation, review, or approval of the manuscript.\n\nBODY.RESULTS.PATIENTS:\nWe screened 919 patients on furosemide between November 2001 and April 2003. 127 met eligibility criteria and agreed to participate (Figure 2). Of those not enrolled, 407 did not have heart failure according to their physician, 367 did not meet eligibility criteria and 27 declined to participate. 
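The event-rate analysis described in the statistical-methods passage above (negative binomial regression for hospitalizations or death, with censoring at death or withdrawal) could be set up along the following lines. This is a schematic of the general approach, not the authors' code; the column names, toy data and dispersion value are placeholders:

```python
# Schematic negative binomial model for counts of hospitalizations/death,
# with follow-up time entering as an exposure term (log link).
import pandas as pd
import statsmodels.api as sm

df = pd.DataFrame({                              # placeholder layout, not study data
    "events": [2, 0, 1, 3, 0, 1, 0, 2],          # hospitalizations or death per patient
    "followup_years": [1.0, 1.0, 0.6, 1.0, 1.0, 0.8, 1.0, 0.5],  # censored follow-up
    "intervention": [1, 0, 1, 0, 1, 0, 0, 1],
    "age": [63, 58, 71, 66, 54, 60, 67, 72],
})

X = sm.add_constant(df[["intervention", "age"]])
model = sm.GLM(df["events"], X,
               family=sm.families.NegativeBinomial(alpha=1.0),  # dispersion is a placeholder
               exposure=df["followup_years"])
result = model.fit()
# exp(coefficient on "intervention") is the adjusted incidence rate ratio (IRR)
print(result.params)
```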
Of those randomized to the control group, 1 never returned after the first visit, 1 withdrew during the study and 4 died during the study. Follow-up was completed for all of the remaining participants (98%) (Figure 3). Of those randomized to the intervention group, 3 never returned after the first visit, 4 withdrew during the study and 3 died during the study. Follow-up was completed for all of the remaining participants (93%). At baseline, most characteristics were similar between the two groups (Table 1). However, the control group had more participants with hypertension, fewer with diabetes, and fewer men. Of heart failure related characteristics, the control group had more participants with systolic dysfunction, and taking β-blockers. The intervention group had more participants taking ACE inhibitors or ARBs, and digoxin. Regardless of these differences, none were statistically significant. The control group did have statistically significantly higher baseline MLHF scores representing worse symptoms at baseline.\n\nBODY.RESULTS.HOSPITALIZATION OR DEATH:\nThere were 68 hospitalizations (65) or deaths (3) in the intervention group and 111 (107 hospitalizations, 4 deaths) in the control group. The crude all-cause hospital admission or death incidence rate ratio (IRR) was 0.69 (95% CI 0.40, 1.19). After adjusting for age, gender, use of ACE inhibitor or ARB, use of a β-blocker, presence of hypertension, and baseline MLHF, intervention patients were less likely to have the outcome (IRR = 0.53; 95% CI 0.32, 0.89). 61% of patients in the control group had at least one hospitalization or died, and 42% of patients in the intervention group had at least 1 hospitalization or died (p = 0.13).\n\nBODY.RESULTS.CARDIAC HOSPITALIZATION:\n39% of patients in the control group and 34% of patients in the intervention group had at least one hospitalization for cardiac causes (p = 0.55). The unadjusted IRR was 0.79 (95% CI 0.42, 1.5). After adjusting for baseline differences, the IRR was 0.85 (95% CI 0.44, 1.7).\n\nBODY.RESULTS.HEART FAILURE-RELATED QUALITY OF LIFE:\nIn unadjusted analysis, the control group, on average, improved 5 points on the MLHF and the intervention group improved by 1 point. The difference was not statistically significant (3.5 points, 95% CI 11, -4, p = 0.36). After adjusting for baseline differences between the groups, the difference was 2 points (95% CI 9, -5, p = 0.59) suggesting no effect on heart failure-related quality of life.\n\nBODY.RESULTS.OTHER OUTCOMES.KNOWLEDGE:\nHeart failure related knowledge improved more in the intervention group than in the control group. Mean difference in score improvement was 12 percentage points (95% CI 6, 18; p < 0.001).\n\nBODY.RESULTS.OTHER OUTCOMES.SELF-EFFICACY:\nHeart failure self-efficacy improved more in the intervention group than in the control group. Mean difference in score improvement was 2 points (95% CI 0.7, 3.1; p = 0.0026).\n\nBODY.RESULTS.OTHER OUTCOMES.SELF-CARE BEHAVIOR:\nSignificantly more patients in the intervention group than in the control group reported daily weight measurement at 12 months (79% vs. 29%, p < 0.001).\n\nBODY.RESULTS.SUBGROUP ANALYSES ACCORDING TO LITERACY:\nTwenty-four patients in each group had inadequate literacy based on the S-TOFHLA (Table 2). Among these patients, there was no difference in quality of life score in unadjusted and adjusted analyses (difference = -1.6; 95% CI -15, 12); p = 0.81). For the combined outcome of hospitalization or death, the unadjusted IRR was 0.77 (95% CI 0.30, 1.94). 
After adjusting for baseline differences, the IRR was 0.39 (95% CI 0.16, 0.91). Seventy-five patients had marginal or adequate literacy based on the S-TOFHLA. We found no difference in quality of life score in unadjusted and adjusted analyses (difference = -4.2; 95% CI -14, 6; p = 0.40). Among patients in the higher literacy group, the unadjusted IRR for hospitalization or death was 0.65 (95% CI 0.33, 1.3). After adjusting for baseline differences, the IRR was 0.56 (95% CI 0.30, 1.04). We did not find a statistically significant effect modification between literacy and the intervention.\n\nBODY.DISCUSSION:\nA heart failure self-management program designed for patients with low literacy reduced the rate of the combined endpoint of hospitalization or death. The prespecified subgroup analyses suggest that patients with low literacy benefited as much from the intervention as the patients with higher literacy. The success of our intervention reflects the goals of our program. We designed an easy-to-read and use educational booklet and self-management plan, and focused on overcoming barriers to learning self-management [15]. Our intervention was founded on teaching self-management. We focused on helping patients understand signs and symptoms of worsening heart failure and perform self-adjustment of diuretics based on weight fluctuation. Many care providers would not attempt to teach patients, particularly those with low literacy, how to self-adjust their diuretic medication. We found that, with careful teaching, many patients incorporated this strategy into their daily routine successfully, as demonstrated by improved self-care behaviors. Teaching self-adjustment of diuretics, rather than the conventional teaching to call the care provider if weight fluctuates, empowers patients to take more control over their illness. Self-adjustment of diuretic dose is a prominent aspect of the self-management training we provided to the intervention patients. Other programs to improve patient self-management have not been explicit in teaching patients to self-adjust their diuretic dose based on weight fluctuation. Although our outcomes are comparable to others', using this approach puts more control into the hands of the patient. Furthermore, our intervention appears effective among patients with low literacy skills, a group often overlooked for empowering interventions. Our study adds to the growing literature on disease management programs for patients with heart failure [6], particularly those that focus on self-management training [7-10]. Studies focusing on self-management training have demonstrated comparable improvements in hospitalization rates to more comprehensive programs that aim to improve the quality of pharmaceutical prescribing, provide home visits, and take place in specialized cardiology clinics [6]. Such comprehensive programs have also been shown to reduce mortality, but self-management programs have not [6]. We did not detect any difference in heart failure related quality of life which was the outcome we powered our study to detect. Other self-management studies that have found improved quality of life have enrolled patients during a heart failure hospitalization [8,9]; however, we enrolled patients in the outpatient setting while they were clinically stable. Improving quality of life for stable outpatients may be more difficult for this type of intervention. 
We have previously reported that patients with diabetes and low literacy benefited more from a disease management intervention than those with higher literacy skills [26]. A similar result in two different chronic diseases substantiates the claim that appropriately designed disease management programs may have greater effect for low literacy or vulnerable populations, who are most at risk for adverse outcomes with usual care. This finding is particularly important in light of the recent study by DeBusk and colleagues that did not find a reduction in hospitalization with a well-designed comprehensive intervention [13]. The authors and an accompanying editorial [14] suggested that the failure to detect improvement may have occurred because the patients studied were less at-risk than in other studies. They called for more research to determine better ways of targeting disease management. We believe that low literacy is an important marker for vulnerability to adverse outcomes, and that disease management programs targeted to patients with low literacy may be an effective way of focusing resources on those most able to benefit. If patients with low literacy are to be preferentially recruited for such programs, innovative outreach and screening efforts will likely be required, as patients with low literacy may face particular barriers to accessing such care. This study should be interpreted in light of its limitations. Research assistants were not blind to group assignment during the assessment of self-reported outcomes. As such, patients in the intervention may have been more likely to inflate their responses in an effort to please the interviewer. This effect would tend to inflate patient responses to the subjective assessments of heart failure-related quality of life, self-efficacy, and self-care behaviors. The MLHF questionnaire was modified from its original form to make it easier for patients with low literacy to respond. This change in the scale may have changed its ability to detect important changes in heart failure related quality of life. Because the groups' mean scores were almost identical, we do not feel this limitation changed our results. In a similar vein, most questionnaires are not validated in low literacy populations, raising questions as to their ability to perform to the same standards. Our sample size was small, which did not allow for an even distribution of baseline variables among the groups. We controlled for baseline differences between groups in our analysis. While it is controversial whether or not to control for baseline differences in randomized controlled trials, some analysts have argued that doing so improves the power without introducing bias [24]. A larger, multi-site study would offer better control of confounders, better generalizability, and more power to determine differences in effect according to literacy. We did not collect data on the resources needed to implement this type of intervention in usual settings, and such a study and cost-effectiveness analysis would be helpful for most interventions of this type. We used health educators, not nurses or physicians, to deliver the intervention. By using less highly trained individuals to deliver the intervention, we enabled nurses and physicians to perform clinical tasks more commensurate with their training. 
Other studies that have performed global cost estimates have found that the savings from reductions in hospitalizations exceed the cost of the intervention [6].\n\nBODY.CONCLUSION:\nIn conclusion, our heart failure self-management program, designed for patients of all literacy levels, appears to reduce rates of hospitalization and death. Patients with low literacy, and other vulnerable patients, may stand to benefit most from these programs. Further research into the design, implementation, and dissemination of disease management programs for low literacy patients will be crucial for meeting the health care needs of the growing population of patients with chronic illness.\n\nBODY.COMPETING INTERESTS:\nDrs. DeWalt and Pignone have received honoraria and grants from Pfizer, Inc., Dr. Rothman has received grants from Pfizer, Inc., and Dr. Sueta is in their speakers bureau.\n\nBODY.AUTHORS' CONTRIBUTIONS:\nDD conceived of the study, participated in its design and coordination, performed statistical analyses, interpretation of the data, and drafted the manuscript. RM, MB conceived of the study and participated in its coordination. MK, KC coordinated the study, and collected the data, RR, CS participated in study design and interpretation of the data. MP conceived of the study, participated in its design and coordination, and interpretation of the data. All authors reviewed the manuscript for important intellectual content and gave final approval.\n\nBODY.PRE-PUBLICATION HISTORY:\nThe pre-publication history for this paper can be accessed here:\n\n**Question:** Compared to Standard information about self-care what was the result of Follow-up and thorough education on self-care on Baseline life-quality score?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** no significant difference\n"
}
] |
no significant difference
|
401
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Incisional hernia after upper abdominal surgery: a randomised controlled trial of midline versus transverse incision\n\n ABSTRACT.OBJECTIVES:\nTo determine whether a transverse incision is an alternative to a midline incision in terms of incisional hernia incidence, surgical site infection, postoperative pain, hospital stay and cosmetics in cholecystectomy.\n\nABSTRACT.SUMMARY BACKGROUND DATA:\nIncisional hernias after midline incision are commonly underestimated but probably complicate between 2 and 20% of all abdominal wall closures. The midline incision is the preferred incision for surgery of the upper abdomen despite evidence that alternatives, such as the lateral paramedian and transverse incision, exist and might reduce the rate of incisional hernia. A RCT was preformed in the pre-laparoscopic cholecystectomy era the data of which were never published.\n\nABSTRACT.METHODS:\nOne hundred and fifty female patients were randomly allocated to cholecystectomy through midline or transverse incision. Early complications, the duration to discharge and the in-hospital use of analgesics was noted. Patients returned to the surgical outpatient clinic for evaluation of the cosmetic results of the scar and to evaluate possible complications such as fistula, wound dehiscence and incisional hernia after a minimum of 12 months follow-up.\n\nABSTRACT.RESULTS:\nTwo percent (1/60) of patients that had undergone the procedure through a transverse incision presented with an incisional hernia as opposed to 14% (9/63) of patients from the midline incision group (P = 0.017). Transverse incisions were found to be significantly shorter than midline incisions and associated with more pleasing appearance. More patients having undergone a midline incision, reported pain on day one, two and three postoperatively than patients from the transverse group. The use of analgesics did not differ between the two groups.\n\nABSTRACT.CONCLUSIONS:\nIn light of our results a transverse incision should, if possible, be considered as the preferred incision in acute and elective surgery of the upper abdomen when laparoscopic surgery is not an option.\n\nBODY.INTRODUCTION:\nThe rate of incisional hernia after midline incision is commonly underestimated but probably lies between 2 and 20% [1–5]. Thus, incisional hernia is a major postoperative problem. The treatment of incisional hernia is complicated by high rates of recurrences. Recently, in a randomised controlled trial published by Burger et al. [6], midline incisional hernia repair has been shown to be associated with a 10-year cumulative recurrence rate of 63 and 32% for suture and mesh repair, respectively. The midline incision is the preferred incision for surgery of the upper abdomen, despite evidence that alternatives, such as the lateral paramedian and transverse incision, exist and might reduce the rate of incisional hernia [7]. 
Various approaches to opening the abdomen have been advocated over time. The choice for a certain incision is dependent on the exposure necessary for the desired procedure to succeed. A midline incision, be it supraumbilical, infraumbilical or both, is an approach especially suited for emergency and exploratory surgery because of the quick and generous exposure that can be achieved within a few minutes [8, 9]. The avascular nature of the linea alba minimises blood loss during this procedure. A supraumbilical transverse incision may be utilised in case exposure of the upper abdomen is desired. During this incision, the damage inflicted to the segmental arteries and nerves is previously described as being minimal [10]. Previously, only one randomised controlled trial, comparing transverse and true midline incisions, has been published specifically addressing incisional hernia incidence [11]. To determine whether the use of a transverse incision is an alternative to a midline incision for open cholecystectomy in terms of incisional hernia incidence, surgical site infection, postoperative pain and hospital stay, this randomised controlled trial was performed. This trial was conducted in an era when laparoscopic cholecystectomy was not yet available. The possibility of low incisional hernia rates after transverse incisions and the fact that little is known about potential advantages incited us to publish the relevant results of this randomised controlled trial which has been performed in the past and has only been reported in a Dutch thesis by one of the authors (H.L.). The primary endpoint of this study was the incisional hernia incidence after 12 months of follow-up. Secondary endpoints included pain and cosmetic appearance.\n\nBODY.METHODS.PROTOCOL:\nSome 150 consecutive female patients were randomly assigned to a midline or transverse incision as an approach for elective cholecystectomy or combined cholecystectomy and cholangiography (with or without consecutive choledochotomy) (75 and 75 patients, respectively). Emergency procedures were excluded from participation. The sample size is based on an incisional hernia rate reduction from 20 to 6% at a power of 80% and an error rate of 5%. Obtaining informed consent was conducted in accordance with the ethical standards of the Helsinki Declaration of 1975. The investigation reported was performed with informed consent from all of the patients and followed the guidelines for experimental investigation with human subjects and was approved by the medical ethics committee. An independent statistician prepared closed, tamper-proof envelopes containing the random allocation (Fig. 1). Patients were randomised for one of the procedures in theatre through the opening of the envelopes.Fig. 1Flow chart of patient inclusion and follow-up Patient-related factors that were recorded were age, body mass and length and date of operation. Operation-related factors that were recorded were the exact nature of the operation, length of the incision, the thickness of the subcutaneous fat, surgeon performing the procedure, as well as the duration of the operation (skin-to-skin time). In the immediate postoperative period, the use, dose and type of analgesics was recorded and a pain score was administered. The use of analgesics (morphine 7.5 mg intra-muscular injection, 4 h minimum interval between consecutive injections) was monitored for 48 h after surgery; the pain score was administered for the first 6 days after surgery. 
In patients assigned to surgery through a midline incision, the skin was incised from just below the xyphoid process to just above the umbilicus. The abdominal wall was opened in the midline by incising the linea alba. A Collin type (two-bladed) self-retaining retractor was used to maintain exposure. The abdominal wall was closed in one layer using single polygalactin 910 sutures (Vicryl; Ethicon, Amersfoort, The Netherlands). The skin was consequently closed using running monofilament nylon sutures (Ethilon; Ethicon, Amersfoort, The Netherlands). Patients randomised for a transverse incision received a right-sided unilateral transverse incision between 3 and 4 cm below the costal margin. The rectus muscle was incised. The fibres of the external and internal obliques and the transverse muscles were separated in the direction of their course. Exposure was achieved through the use of a manually held single-bladed retractor. Closure of the abdominal wall was achieved by closure of the peritoneum and the posterior rectus fascia using a continuous, polygalactin 910 suture (Vicryl; Ethicon, Amersfoort, The Netherlands). The anterior rectus sheath and the fascia of the internal and external transverses were closed using simple interrupted polygalactin 910 sutures (Vicryl; Ethicon, Amersfoort, The Netherlands). Towards the end of both procedures, a Redon low-vacuum drain catheter was placed, which was guided outside the abdominal cavity approximately 5 cm from the incision. The skin was consequently closed using continuous monofilament nylon suture (Ethilon; Ethicon, Amersfoort, The Netherlands). All patients received a dose of 5,000 IU of sodium–heparin on the morning of the procedure as thrombosis prophylaxis.\n\nBODY.METHODS.STATISTICAL ANALYSIS:\nThe Pearson χ2 test was used for comparing percentages. In case of small expected numbers, a Fisher's exact test was performed. Continuous variables were analysed using the Mann–Whitney test. A P-value of 0.05 or less (two-sided) was considered to be statistically significant. Means and medians are expressed ±standard deviation (SD).\n\nBODY.METHODS.FOLLOW-UP:\nPatients returned to the surgical outpatient clinic for evaluation of the cosmetic results of the scar and to evaluate possible complications, such as fistula, wound dehiscence and incisional hernia, after a minimum of 12 months follow-up. The patient and the surgeon evaluated the cosmetic results independently and were asked to rate the scar as unsatisfactory, satisfactory or fine. Furthermore, the length and width of the scar was measured.\n\nBODY.RESULTS.STUDY GROUP:\nSome 150 consecutive patients were randomised for participation in this study during an inclusion period from April 1977 until July 1979. Seventy-five patients received a transverse incision and 75 patients a midline incision (Fig. 1). One patient was withdrawn from further follow-up after developing peritonitis and consequent acute respiratory distress syndrome (ARDS) not related to the closure of the abdominal wall 2 days after surgery (transverse incision group). The patients' average age was 51.9 and 51.4 years for the midline and the transverse incision groups, respectively. Furthermore, no differences were found in the body mass and average length between the two groups (Table 1). A cholecystectomy was performed using a transverse incision in 52 patients and utilising a midline incision in 52 patients also. Fifteen and 16 patients, respectively, underwent a combined cholangiography/cholecystectomy. 
A further 7 and 6 patients, respectively, were treated with a cholangiography/cholecystectomy plus additional choledochotomy and the postexploratory placement of a T-tube.Table 1Baseline characteristics of the patients undergoing surgery, according to study groupVariableMidline incisionTransverse incisionn = 75n = 74Average age (years) ± SD51.9 ± 14.851.4 ± 13.8Average weight (kg) ± SD71.3 ± 14.568 ± 14.3Average length (cm) ± SD163.5 ± 7.8164 ± 7.3\n\nBODY.RESULTS.SURGEON:\nStaff surgeons performed 17% (13/75 patients) of all procedures performed through a midline incision. The remainder of the procedures through a midline incision was carried out under staff surgeon supervision. Staff surgeons performed 14% of all procedures in the transverse incisions study group (10/74 patients) and supervised the remainder. No statistically significant difference was found between the two randomised groups (P = 0.65).\n\nBODY.RESULTS.DURATION OF SURGERY:\nNo significant difference was noted in the skin-to-skin time (in min) for the two different incisions (Table 2). Surgery utilising midline and transverse incision took 56.9 ± 29.3 and 53.2 ± 26.8 min, respectively (P = 0.35). The total duration of the procedures until extubation (in min) did not differ between the midline and transverse incisions (71.0 ± 30.5 and 67.0 ± 27.3, respectively, P = 0.34).Table 2Length of incision, thickness of subcutaneous fat and skin-to-skin time, according to study groupVariableMidline incisionTransverse incisionP-valueLength of incision (mm) ± SDa164 ± 28140 ± 24<0.0001Thickness of subcutaneous fat (mm) ± SDa34.5 ± 13.030.3 ± 12.40.05Skin-to-skin time (min) ± SDa56.9 ± 29.353.2 ± 26.80.40Width of scar (mm) ± SDb8.3 ± 1.43.3 ± 1.2<0.0001aMeasured during surgery in 75 midline and 74 transverse incisionsbMeasured at follow-up in 63 and 60 midline and transverse incisions, respectively\n\nBODY.RESULTS.PAIN AND ANALGESICS:\nSignificantly more patients, having undergone a midline incision, reported pain on day one, two and three postoperatively (P < 0.0001, Table 3). In the midline incision group, 28/75 patients required no or only one dose of analgesics; the remainder required two or more doses. Thirty-one patients operated through a transverse incision required no analgesics or only one dose; 43 patients (the remainder) required two or more. No significant difference in the use of analgesics was found between the groups (P = 0.69).Table 3Postoperatively reported pain, according to study group, shown as the number of patients reporting pain at the time points indicated (percentage), with the remainder of patients reporting no painTime point after surgeryMidline incision n = 75Transverse incision n = 74P-valuePatients reporting pain, n (%)Patients reporting pain, n (%)3–4 h68 (91)60 (81)0.09First day64 (85)39 (53)<0.0001Second day57 (76)23 (31)<0.0001Third day28 (37)9 (12)<0.0001Fourth day5 (7)3 (4)0.72Fifth day0 (0)1 (1)0.50Sixth day0 (0)1 (1)0.50\n\nBODY.RESULTS.COMPLICATIONS:\nPostoperative complications (Table 4) were seen in 16 out of 75 patients (21%) from the midline incision group and in 15% from the transverse incision group (11 patients) (P = 0.30). Briefly, one patient in each group developed cardiac complications; 8 and 6 patients developed urinary retention after the midline and transverse incisions, respectively (P = 0.59). 
Surgical site infections were diagnosed in 7 and 3 patients, respectively (P = 0.33).Table 4Rate of complications after surgery, according to study group, shown as the number of patients diagnosed with complications (percentage)ComplicationMidline incisionTransverse incisionP-valuen = 75 n (%)n = 75 n (%)Cardiac1 (1)1 (1)1Urinary retention8 (12)6 (8)0.59ARDS01 (1)0.50Surgical site infection7 (9)3 (4)0.33Haemorrhage1 (1)00.50Pneumonia01 (1)0.50Total17 (23)12 (16)0.30\n\nBODY.RESULTS.DISCHARGE:\nForty-five (60%) and 42 (57%) patients from the patients having undergone a midline or a transverse incision, respectively, were discharged on day 6 or 7 postoperatively. The remaining patients from each group left hospital care on day 8 or later. The duration of hospital admission did not differ between the two types of incision (P = 0.74).\n\nBODY.RESULTS.COSMETICS:\nThe width and length of all incisions was measured during the follow-up visit (Table 2). The mean width of the scar after the healing of the midline incisions was found to be 8.3 ± 1.4 mm. The mean width of the scar after the healing of the transverse incisions was measured to be 3.3 ± 1.2 mm. This observed difference is significant (P < 0.0001). The length of the incisions was 140 ± 24 mm and 164 ± 28 mm for the transverse and the midline incisions, respectively. The difference in scar length was found to be significant (P < 0.0001).\n\nBODY.FOLLOW-UP:\nEighty-one percent of all patients operated through a transverse incision were seen during the follow-up examination (n = 60). Of the patients operated through a midline incision, 63 out of 75 were seen at the outpatient clinic (84%). The patients that were lost to follow-up could either not be traced or had deceased (Fig. 1). The minimum follow-up for the evaluation of cosmetic results and hernia incidence was 12 months and the maximum was 36 months.\n\nBODY.FOLLOW-UP.INCISIONAL HERNIA:\nFrom the patients that had undergone the procedure through a transverse incision, one (1/60; 2%) presented with an incisional hernia as opposed to 9 patients from the midline incision group (9/63; 14%); 95% confidence interval (CI) 7.5–25.4%. This difference in hernia incidence is significant (P = 0.017). No significant correlation was found between the incisional hernia rate and surgical site infection (P = 0.07).\n\nBODY.FOLLOW-UP.SUBJECTIVE COSMETICS:\nPatients and surgeons alike were asked to rate the appearance of the scar during the postoperative follow-up outpatient clinic visit. Both the surgeons and the patients found the scar resulting from the transverse incision to be more cosmetically pleasing (P < 0.0001 and P = 0.03, respectively, Table 5).Table 5Number of patients and surgeons rating the cosmetics of a scar at follow-upScoreMidline incision (n = 63)Transverse incision (n = 60)Patients, n (%)Surgeons, n (%)Patients, n (%)Surgeons, n (%)Unsatisfactory6 (10)25 (40)2 (3)6 (10)Satisfactory16 (25)27 (43)9 (15)12 (20)Fine41 (65)11 (17)49 (82)42 (70)Total63636060Difference between type of incision: patients P = 0.03; surgeons P < 0.0001\n\nBODY.DISCUSSION:\nThis prospective randomised study of transverse and midline incisions for open cholecystectomy shows that a significant reduction of incisional hernia incidence can be achieved through the use of a transverse incision. Only one other study (published in 1980) reported the incidence of incisional hernia after upper abdominal midline and unilateral transverse incision in a randomised trial. 
No difference between the two techniques (8 and 6% incisional hernia, respectively) was found, but the relatively short follow-up of 6 months, however, may be held accountable for this finding [11]. Three retrospective studies showed rates of incisional hernia of 3.2, 5.4 and 16.5% for midline incision and 1.3, 6.7 and 13.4% for transverse incision without statistically significant differences [12–14]. The possible reason for the rather high incidence of incisional hernia in the midline incision group (14%) may lie in the use of resorbable 910 polygalactin sutures. Nevertheless, the use of the same type of resorbable suture in the closure of the transverse incisions resulted in a 2% hernia rate. There is evidence for the importance of proper technique and choice of incision as a means to reduce incisional hernia being more important than the use of suture material [7]. Furthermore, as mentioned above, it is known that the incidence of incisional hernia in the case of a midline incision lies between 2 and 20%. From our data, the NNT (numbers needed to treat) is calculated to be 8 (95% CI 5–30) and the RRR (relative risk reduction) is 88% (95% CI 23–100%). Luijendijk et al. [15] have published a hernia rate of 2% after Pfannenstiel incisions closed using 910 polygalactin, which is in agreement with our findings in the patients randomised for a transverse incision, emphasising the importance of the incision over the choice of suture material. In our study, significantly fewer patients reported pain on day 1, 2 and 3 after transverse incisions, a result that was also described by other authors [16, 17]. Greenall et al. [18] published a contradictory report (in 1980) in which no significant difference in postoperative pain was found between midline and transverse incisions. The previously mentioned study, however, only analysed 46 out of 572 patients (8%) with regard to pain, which may explain the finding. In the same way, Lacy et al. suspended visual analogue pain scoring in a study comparing midline and transverse incisions for abdominal aortic surgery. Remarkably, the two groups in our study did not differ in terms of postoperative analgesia, a finding that is also reported by Lindgren et al. [17] and Donati et al. [19]. In our study, surgeons as well as patients were significantly more satisfied with the aesthetic appearance after a transverse in comparison with a midline incision. The scars after transverse incisions were found to be significantly shorter and less wide than the midline incisions, which may account for the observed difference. A possible reason for this is that a transverse incision is executed parallel to the prevailing direction of the skin lines on the abdomen and, therefore, the tension on the wound and consequent scar is low. Cholecystectomy has come a long way since this trial. The introduction and widespread acceptance of laparoscopic technique as the treatment of choice has rendered open cholecystectomy to be an operation for exceptional, and perhaps surgically difficult, circumstances. Nowadays, the study reported is hardly feasible, yet, the results are still applicable and very relevant for other surgical procedures in the (upper) abdomen. Knowledge of the favourable results of a transverse incision may aid surgeons in their choice when finding themselves in the unfortunate position of needing conversion to open cholecystectomy. 
In conclusion, this investigation on transverse incisions might be helpful in reducing the incidence of incisional hernia in patients after open cholecystectomy. The midline incision is a preferred manner to achieve exposure of the abdominal cavity and is considered to be easily performed and quick. Although the midline incision is generally accepted, the incidence of incisional hernias is surprisingly high [1–5]. The choice for a particular incision should not only be based on exposure, but also on hernia incidence reduction, especially since recurrence rates after hernia repair are reported to be very high. Furthermore, the recurrence rate after incisional hernia repair is a disappointing 63 and 32% for suture and mesh repair, respectively [6]. In the light of these results, incisional hernia prevention is warranted. In this investigation, it is shown that a significant reduction (from 14.5 to 1.7%) of incisional hernia incidence was achieved by using a transverse incision. Hence, a transverse incision should be considered as the preferred incision in acute and elective surgery of the upper abdomen in which laparoscopic surgery is not an option. Full exposure of two quadrants is feasible through the use of a unilateral transverse incision in, for example, biliary, bariatric, liver and colonic surgery. The transverse incision should be part of the abdominal surgeon's armamentarium and is a preferable incision to prevent the high incidence of incisional hernia after abdominal surgery.\n\n**Question:** Compared to Midline incision what was the result of Transverse incision on Cardiac complications?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** no significant difference\n"
}
] |
no significant difference
|
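The hernia-trial record above quotes an NNT of 8 and an RRR of 88% for transverse versus midline incision. Those figures can be reproduced directly from the event counts it reports (9/63 hernias after midline, 1/60 after transverse); the short derivation below is an editorial check of that arithmetic, not text from the trial itself.

```latex
% Worked check of the NNT and RRR quoted above, using the reported counts.
\begin{align*}
  \text{ARR} &= \tfrac{9}{63} - \tfrac{1}{60} \approx 0.143 - 0.017 = 0.126\\
  \text{NNT} &= 1/\text{ARR} \approx 1/0.126 \approx 8\\
  \text{RRR} &= \text{ARR}\big/\tfrac{9}{63} \approx 0.126/0.143 \approx 0.88 \;(88\%)
\end{align*}
```

Both values match the figures reported in the record, consistent with its stated reduction in hernia incidence from roughly 14% to 2%.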
310
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Efficacy and Tolerability of Budesonide/Formoterol in One Hydrofluoroalkane Pressurized Metered-Dose Inhaler in Patients with Chronic Obstructive Pulmonary Disease\n\n ABSTRACT:\nBackground: Combination therapy with a long-acting bronchodilator and an inhaled corticosteroid (ICS) is recommended in patients with chronic obstructive pulmonary disease (COPD) who have frequent exacerbations. The efficacy and tolerability of the combination of budesonide/formoterol have been demonstrated in patients with COPD when administered via the dry powder inhaler (DPI) in a 1-year study and when administered via the hydrofluoroalkane (HFA) pressurized metered-dose inhaler (pMDI) in a 6-month study. Objective: This study assessed the long-term efficacy and tolerability of budesonide/formoterol HFA pMDI in patients with moderate to very severe COPD. Methods: This was a 12-month, randomized, double-blind, double-dummy, parallel-group, active- and placebo-controlled, multicentre study (NCT00206167) of 1964 patients aged ≥40 years with moderate to very severe COPD conducted from 2005 to 2007 at 237 sites in the US, Europe and Mexico. After 2 weeks of treatment based on previous therapy (ICSs, short-acting bronchodilators allowed), patients received one of the following treatments twice daily: budesonide/formoterol pMDI 160/4.5 μg × two inhalations (320/9 μg); budesonide/formoterol pMDI 80/4.5 μg × two inhalations (160/9 μg); formoterol DPI 4.5 μg × two inhalations (9 μg); or placebo. Main outcome measures: The co-primary efficacy variables were pre-dose forced expiratory volume in 1 second (FEV1) and 1-hour post-dose FEV1. Results: Budesonide/formoterol 320/9 μg demonstrated greater improvements in pre-dose FEV1 versus formoterol (p = 0.008), and both budesonide/formoterol doses demonstrated greater improvements in 1-hour post-dose FEV1 versus placebo (p < 0.001). The rate of COPD exacerbations was lower in both budesonide/formoterol groups compared with formoterol and placebo (p ≤ 0.004). Both budesonide/formoterol doses were more effective than placebo (p ≤ 0.006) for controlling dyspnoea and improving health status (St George's Respiratory Questionnaire). All treatments were generally well tolerated. The incidence of pneumonia was not different for active (3.4–4.0%) and placebo (5.0%) groups. Conclusions: Budesonide/formoterol pMDI (320/9 μg and 160/9 μg) improved pulmonary function and reduced symptoms and exacerbations over 1 year in patients with moderate to very severe COPD. Only budesonide/formoterol pMDI 320/9 μg demonstrated greater efficacy for both co-primary variables compared with formoterol DPI 9 μg. 
Both budesonide/formoterol pMDI dosages were well tolerated relative to formoterol and placebo.\n\nABSTRACT.ELECTRONIC SUPPLEMENTARY MATERIAL:\nSupplementary material is available for this article at 10.2165/00003495-200969050-00004 and is accessible for authorized users.\n\nBODY.BACKGROUND:\nIn patients with chronic obstructive pulmonary disease (COPD) and frequent exacerbations, the combination of a long-acting bronchodilator and an inhaled corticosteroid (ICS) is recommended.[1] Budesonide and formoterol delivered via a single dry powder inhaler (DPI; Symbicort® Turbuhaler®; AstraZeneca, Lund, Sweden) have been shown to improve pulmonary function[2,3] and reduce COPD exacerbation rates.[2,3] Fixed-combination budesonide/formoterol is available in the US in a hydrofluoroalkane pressurized metered-dose inhaler (pMDI; Symbicort® Inhalation Aerosol; AstraZeneca LP, Wilmington, DE, USA). While most patients with COPD can benefit from either a pMDIor DPI, the pMDI formulation allows patients whomay have inadequate peak inspiratory flow for optimal delivery from a DPI to use the pMDI effectively. A recent 6-month study demonstrated the efficacy and tolerability of budesonide/formoterol pMDI in patients with moderate to very severe COPD.[4] Treatment with budesonide/formoterol pMDI resulted in significant improvements in pulmonary function, dyspnoea and health-related quality of life; however, the duration of the study was limiting with respect to the analysis of low-frequency events, such as COPD exacerbations, and precluded an assessment of safety outcomes with long-term exposure.[4] In this study, the long-term efficacy and tolerability of two dosage strengths of budesonide/formoterol pMDI were assessed over 12 months in patients with moderate to very severe COPD.\n\nBODY.PATIENTS AND METHODS.PATIENTS:\nThe inclusion criteria were designed to select a population with moderate to very severe COPD with previous exacerbations (i.e. appropriate candidates for combination ICS/long-acting β2-adrenoceptor agonist [LABA] therapy): age ≥40 years, diagnosis of symptomatic COPD for >2 years, ≥10 pack-year smoking history, prebronchodilator forced expiratory volume in 1 second (FEV1) of ≤50% of predicted normal and prebronchodilator FEV1/forced vital capacity (FVC) of <70%. Patients were to have a Modified Medical Research Council dyspnoea scale score of ≥2 and a history of at least one COPD exacerbation requiring oral corticosteroids or antibacterials within 1–12 months before the first study visit. Additional enrolment criteria were the same as those in a similar 6-month study by Tashkin et al.[4]\n\nBODY.PATIENTS AND METHODS.STUDY DESIGN:\nThis was a randomized, double-blind, double-dummy, parallel-group, active- and placebo-controlled, multicentre trial (ClinicalTrials.gov identifier: NCT00206167) conducted from 2005 to 2007 at 237 sites in the US, Europe and Mexico. The study protocol was approved by a local institutional review board and ethics committee, and written informed consent was obtained from patients. The study was designed to conform with the Declaration of Helsinki, and was consistent with the International Conference on Harmonisation and Good Clinical Practice and applicable regulatory requirements. After meeting eligibility criteria, patients entered a 2-week run-in period, during which they received ICS monotherapy if previously stable on ICS (alone or in combination) and ipratropium bromide at a fixed dose if previously receiving anticholinergics. 
Albuterol (salbutamol) was permitted for rescue use throughout the study. At randomization, all previous ICSs were discontinued and patients were randomized to one of four treatments (figure 1). Patients who were treated with ipratropium bromide during the run-in period were allowed to continue on it throughout the study period. Other allowed and disallowed concomitant medications were the same as those reported in the study by Tashkin et al.[4]\nFig. 1Study design. To maintain blinding, patients received both a pressurized metered-dose inhaler (pMDI) and a dry powder inhaler (DPI) containing either active treatment or double-dummy placebo (PL) as appropriate. This study used formoterol (FM) DPI (Oxis® Turbuhaler®, AstraZeneca, Lund, Sweden) as the FM comparator because FM is not available as a hydrofluoroalkane pMDI in the US. A previous study in asthma patients reported equivalent FM-related bronchodilatory effects when FM was administered in combination with budesonide (BUD) via pMDI or alone via DPI.[5] Patients were asked to return to the clinic for follow-up visits 3–8 at the end of months 1, 2, 4, 6, 9 and 12, and received a telephone call 4 weeks after the last clinic visit. bid = twice daily; R = randomization.\n\nBODY.PATIENTS AND METHODS.EFFICACY EVALUATIONS:\nThe co-primary efficacy variables, measured at all clinic visits, were pre-dose FEV1 and 1-hour post-dose FEV1. Additional pulmonary function variables included pre-dose and 1-hour post-dose FVC measured at all clinic visits, and morning and evening peak expiratory flow (PEF) recorded daily in patient diaries. In a subset of patients (n = 491 of 1964; 25%) who agreed to undergo spirometry testing, 12-hour serial spirometry was performed and pre-dose and 1-hour post-dose inspiratory capacity (IC) were collected at randomization and months 6 and 12. FEV1 was measured pre-dose and at 5, 15, 30, 60, 120, 180, 240, 360, 480, 600 and 720 minutes post-dose. Baseline-adjusted average 12-hour FEV1 was calculated as the area between the 12-hour post-dose FEV1-over-time curve and the baseline pre-dose FEV1, divided by observation time. Spirometry was performed according to American Thoracic Society guidelines.[6] Crapo-predicted normals for FEV1 were used.[7] Secondary efficacy endpoints included COPD exacerbations, dyspnoea (separate and composite scores) and health status, all assessed as previously described.[4] A COPD exacerbation was defined as worsening of COPD requiring an oral corticosteroid or hospitalization.[4] Dyspnoea was assessed daily before the evening dose of study medication using the Breathlessness Diary, a validated single-item component of the Breathlessness Cough and Sputum Scale (BCSS).[8] Health status was assessed at months 1, 2, 6 and 12 using the St George's Respiratory Questionnaire (SGRQ).[9,10] Sleep score, percentage of awakening-free nights (sleep score of 0) and study rescue medication use were assessed as previously described by Tashkin et al.[4]\n\nBODY.PATIENTS AND METHODS.SAFETY EVALUATIONS:\nSafety was assessed by adverse event (AE) reporting. Pneumonia events were reported by physicians based on the Medical Dictionary for Regulatory Activities (version 10.0) pneumonia-related preferred terms (pneumonia, bronchopneumonia, lobar pneumonia or pneumonia staphylococcal). 
Vital signs and 12-lead ECGs were evaluated as previously described by Tashkin et al.[4] Subsets of patients were assessed for 24-hour urinary cortisol (n = 179), 24-hour Holter monitoring (n = 520), bone mineral density (BMD) at the hip and spine regions (n = 326), and ophthalmological assessments (n = 461), including intraocular pressure and progression of lenticular opacities. In all patients, blood and urine samples were collected, and comprehensive physical examinations were carried out at the time of screening and at the end of month 12. Blood and urine samples were analysed by a central laboratory (Quest Diagnostics Clinical Laboratories, Van Nuys, CA, USA). Vital signs, including heart rate and blood pressure, were measured at all study visits. Samples to assess 24-hour urinary cortisol levels were collected in a subset of patients at or before randomization and within 1 week before the 6- and 12-month visits, and analysed by Quest Diagnostics. 24-Hour Holter monitoring was performed at screening and months 1 and 4, BMD assessments (two sets of dual energy x-ray absorptiometry scans in the hip and spine regions) were performed at screening and the end of month 12, and ophthalmology assessments (intraocular pressure and lenticular opacities) were performed at screening and the end of months 6 and 12.\n\nBODY.PATIENTS AND METHODS.STATISTICAL ANALYSES:\nThe efficacy analysis set (i.e. intent-to-treat population) included all randomized patients who received at least one dose of randomized study medication and contributed sufficient data for at least one co-primary or secondary efficacy endpoint to be calculated during the randomized treatment period. The safety analysis population included all randomized patients who received at least one dose of randomized study medication and from whom any post-randomization data were available. For the subsets of patients who underwent serial spirometry, 24-hour urinary cortisol, BMD, 24-hour Holter monitoring and ophthalmological analyses, analysis sets were defined as those patients who received at least one dose of randomized study medication and had baseline and on-treatment values for the variable being assessed. Patients who discontinued prematurely completed final visit (month 12) assessments at the time of discontinuation, followed by the 4-week follow-up. A sample size of approximately 400 patients per treatment group was calculated to allow 90% power to detect a reduction in the number of COPD exacerbations of approximately 30%, adjusting for overdispersion of 2.3. This sample size ensured >95% power to detect a difference of 0.10L in FEV1, given an estimated standard deviation of 0.3 L. All tests were two-sided, with p ≤ 0.05 considered significant. The prespecified primary comparators were formoterol DPI for pre-dose FEV1 to demonstrate the contribution of budesonide and placebo for 1-hour post-dose FEV1. To address issues of multiplicity relating to multiple-dose comparisons and multiple variables of interest, a fixed-sequence testing procedure was adopted. The primary comparisons were prespecified as budesonide/formoterol pMDI 320/9 μg compared with (i) placebo for pre-dose FEV1 and 1-hour post-dose FEV1; and (ii) formoterol DPI 9 μg for pre-dose FEV1. If significant differences were obtained, comparisons continued with budesonide/formoterol pMDI 320/9 μg compared with placebo for (i) dyspnoea; (ii) SGRQ total score; and (iii) number of exacerbations. 
If significant differences were obtained for each of these key secondary variables, the lower dose of budesonide/formoterol pMDI was tested on the co-primary variables, as previously described, and, if significant differences were obtained, testing continued with number of exacerbations, dyspnoea and SGRQ total score compared with placebo. For all secondary efficacy variables, the primary comparison was budesonide/formoterol pMDI versus placebo. Primary and secondary variables were assessed as the change from baseline to the average over the randomized treatment period except for average 12-hour FEV1, mean FEV1 at 12 hours and SGRQ at end of treatment. Changes from baseline in the co-primary efficacy variables were analysed via analysis of co-variance (ANCOVA), adjusting for treatment, country and baseline value. The following three key secondary efficacy variables were identified: (i) dyspnoea; (ii) SGRQ total score; and (iii) exacerbations. Secondary variables were analysed similar to the co-primary efficacy variables; however, exacerbation rate (number/patient-treatment year) was analysed using a Poisson regression model, adjusted for country and differential randomization time, and time to first COPD exacerbation was described via Kaplan-Meier plot and analysed using the log-rank test. The number and percentage of patients experiencing clinically meaningful changes (based on a prespecified minimal important difference) in dyspnoea were analysed using the Cochran-Mantel-Haenszel test, adjusting for country. Geometric mean 24-hour urinary cortisol levels at end of treatment were compared between treatment groups using a multiplicative ANCOVA model. Mean changes from baseline to the average during the randomized treatment period in 12-lead ECG, and Holter variables and ophthalmology variables were analysed using a model similar to that used for the co-primary variables. BMD variables (natural logarithm of the analysis timepoint minus the natural logarithm of the respective baseline value for hip and spine) were analysed using an ANCOVA model adjusting for country, treatment and natural logarithm of the baseline value. Other safety data were summarized using descriptive statistics. No formal hypothesis testing of the safety data was performed.\n\nBODY.RESULTS.PATIENTS:\nOf 1964 randomized patients, 1355 completed the study (figure 2). Discontinuation was greater with placebo versus budesonide/formoterol and formoterol driven by withdrawal of consent. Time to discontinuation was significantly (p ≤ 0.004) prolonged in both budesonide/formoterol groups versus placebo. AE was the most common reason for study discontinuation, with a similar incidence across treatments (12.1–13.5%). Approximately 60% of patients had documented use of an ICS, either alone or in combination before entering the study. The percentage of patients who discontinued from the study in the formoterol and placebo groups was slightly higher in patients previously treated with ICS (34.9% and 38.0%, respectively) compared with those not previously receiving ICS before the study (26.8% and 33.9%, respectively). In contrast, patients in the budesonide/formoterol 320/9-μg group previously receiving ICS had a lower discontinuation rate compared with those not previously receiving ICS (23.1% vs 34.1%, respectively). In the budesonide/formoterol 160/9-μg group, the percentage of patients who discontinued the study was similar for those who were previously receiving an ICS (29.5%) and those who were not (28.2%).\nFig. 
2Patient disposition. AE = adverse event; bid = twice daily; BUD = budesonide; DPI = dry powder inhaler; FM = formoterol; med = medication; PL = placebo; pMDI = pressurized metered-dose inhaler. Most demographic and disease characteristics at baseline were similar across treatments (table I). However, in the serial spirometry subset (n = 491), mean percentage reversibility was greater in the budesonide/formoterol 320/9 μg (19.9%), budesonide/formoterol 160/9 μg (20.7%) and placebo (19.5%) groups versus formoterol (16.9%). COPD severity[1] was moderate in 17.8%, severe in 60.4% and very severe in 21.5% of patients. Common co-morbid conditions included hypertension (41.6%), lipid profile abnormalities (22.0%), cardiac disease (17.7%), diabetes mellitus (11.0%), osteoporosis (10.5%), cataracts (5.2%), atrial fibrillation/arrhythmia (4.3%) and congestive cardiac failure (2.9%).\nTable IPatient demographic and baseline clinical characteristics of randomized patients\n\nBODY.RESULTS.EFFICACY EVALUATIONS.PULMONARY FUNCTION.CO-PRIMARY ASSESSMENTS:\nImprovements in pre-dose FEV1 were significantly greater for budesonide/formoterol 320/9 μg compared with formoterol (primary comparison; p = 0.008) and for both budesonide/formoterol dosages compared with placebo (p < 0.001) [figure 3a]. Although improvements in 1-hour post-dose FEV1 were significantly greater for both budesonide/formoterol dosages compared with placebo (primary comparison; p < 0.001), budesonide/formoterol 320/9 μg also resulted in significantly greater improvements compared with formoterol (p = 0.023) [figure 3b]. Improvements from baseline were apparent at the first assessment (pre-dose FEV1 at end of month 1; 1-hour post-dose FEV1 at day of randomization) and overall maintained over the 12-month treatment period for both budesonide/formoterol dosages (figure 3a and b).\nFig. 3Co-primary efficacy endpoints. Least squares mean change from baseline by study visit over the randomized treatment period in (a) pre-dose forced expiratory volume in 1 second (FEV1) and (b) 1-hour post-dose FEV1. BUD = budesonide; DPI = dry powder inhaler; FM = formoterol; pMDI = pressurized metered-dose inhaler. * p < 0.001 vs placebo; † p ≤ 0.023 vs FM.\n\nBODY.RESULTS.EFFICACY EVALUATIONS.PULMONARY FUNCTION.SECONDARY ASSESSMENTS:\nMorning and evening PEF improved significantly for all active treatments compared with placebo (p ≤ 0.012) and for both budesonide/formoterol dosages compared with formoterol (p ≤ 0.017) [table II]. Results for pre-dose and 1-hour post-dose FVC are presented in the online supplement (see figures S1a and b in the supplementary material ['ArticlePlus'] at http://links.adisonline.com/DGZ/A5).\nTable IIMean (SD) changes from baseline in additional pulmonary function assessments In the serial spirometry subset, a ≥15% improvement in FEV1 was observed at 5 minutes after dose administration (first assessment) with all active treatments on the day of randomization (budesonide/formoterol 320/9 μg, 17.7%; budesonide/formoterol 160/9 μg, 20.3%; formoterol 9 μg, 16.5%) and at end of treatment (budesonide/formoterol 320/9 μg, 21.8%; budesonide/formoterol 160/9 μg, 22.4%; formoterol 9 μg, 15.0%) [figures 4a, b and c]. There was no evidence of a diminished effect at end of treatment over the 12-hour period in the budesonide/formoterol groups. 
Mean FEV1 at 12 hours and baseline-adjusted average 12-hour FEV1 were significantly improved with both budesonide/formoterol dosages compared with placebo on the day of randomization and at end of treatment (p ≤ 0.002) [table II]. Although both budesonide/formoterol dosages demonstrated significant improvements in mean FEV1 at 12 hours and in baseline-adjusted average 12-hour FEV1 compared with formoterol on the day of randomization (p ≤ 0.029), only budesonide/formoterol 320/9 μg demonstrated this effect at the end of treatment (p ≤ 0.004) [table II].\nFig. 4Mean percentage change from baseline in forced expiratory volume in 1 second (FEV1) over 12 hours at randomization and end of treatment (EOT) for (a) budesonide (BUD)/formoterol (FM) 320/9 μg twice daily (bid) vs placebo, (b) BUD/FM 160/9 μg bid vs placebo and (c) FM 9 μg bid vs placebo. DOR = day of randomization; DPI = dry powder inhaler; pMDI = pressurized metered-dose inhaler. No significant differences in pre-dose IC were observed among the treatment groups (figure 5a). Improvements from baseline to the average over the randomized treatment period in 1-hour post-dose IC were significantly greater with both budesonide/formoterol dosages compared with placebo (p < 0.001) and formoterol (p ≤ 0.018) [figure 5b]. Both dosages of budesonide/formoterol resulted in a 350 mL (≈25%) improvement from baseline to the average over the randomized treatment period in 1-hour post-dose IC, and formoterol resulted in a 210 mL (17%) improvement in 1-hour post-dose IC.\nFig. 5Least squares mean change from baseline by study visit over the randomized treatment period in (a) pre-dose inspiratory capacity (IC) and (b) 1-hour post-dose IC. BUD = budesonide; DPI = dry powder inhaler; FM = formoterol; pMDI = pressurized metered-dose inhaler. * p < 0.001 vs placebo; † p = 0.01 vs FM; ‡ p < 0.05 vs FM; § p < 0.01 vs placebo.\n\nBODY.RESULTS.EFFICACY EVALUATIONS.CHRONIC OBSTRUCTIVE PULMONARY DISEASE (COPD) EXACERBATIONS:\nTime to first COPD exacerbation was significantly prolonged with both budesonide/formoterol dosages compared with placebo (p ≤ 0.004) and with budesonide/formoterol 320/9 μg compared with formoterol (p = 0.026) [figure 6]. In addition, significant reductions in the overall number of exacerbations per patient-treatment year were observed with budesonide/formoterol 320/9 μg and 160/9 μg versus placebo (37% and 41%, respectively; p < 0.001) and formoterol (25% and 29%, respectively; p ≤ 0.004) [see figure S2 in the supplementary material]. These reductions were driven by exacerbations treated with oral corticosteroids, the rate of which was reduced with budesonide/formoterol 320/9 μg and 160/9 μg versus placebo (37.5% and 42.9%, respectively) and formoterol (24.1% and 30.6%, respectively) [p ≤ 0.006]. In addition, the percentage of patients in the budesonide/formoterol 320/9 μg, budesonide/formoterol 160/9 μg, formoterol and placebo groups who experienced an exacerbation was greater in those receiving ICS therapy either alone or in combination before the study (33.7%, 38.2%, 39.9% and 40.1%, respectively) compared with those not previously receiving ICS (25.8%, 24.9%, 29.4% and 32.8%, respectively).\nFig. 6Kaplan-Meier probability curve for the time to first chronic obstructive pulmonary disease exacerbation during randomized treatment. BUD = budesonide; DPI = dry powder inhaler; FM = formoterol; pMDI = pressurized metered-dose inhaler. 
* p ≤ 0.004 vs placebo; † p = 0.026 vs FM.\n\nBODY.RESULTS.EFFICACY EVALUATIONS.HEALTH STATUS AND COPD SYMPTOMS:\nImprovements in SGRQ total score were significantly greater for both budesonide/formoterol dosages compared with placebo (p ≤ 0.006) and for budesonide/formoterol 160/9 μg compared with formoterol (p = 0.006; table III). The improvements from baseline in SGRQ total score in the active treatment groups were not clinically meaningful versus placebo (i.e. decrease of ≥4 points). Significantly greater improvements in COPD symptom variables were observed for both budesonide/formoterol dosages versus placebo for all variables (BCSS, dyspnoea score, cough score, sputum score, sleep score, percentage of awakening-free nights and total daily rescue medication use) [p ≤ 0.003]. Compared with formoterol, mean improvements were significantly greater with budesonide/formoterol 320/9 μg (p ≤ 0.038) for all COPD symptom variables except sputum score and percentage of awakening-free nights, and with budesonide/formoterol 160/9 μg (p ≤ 0.047) for all COPD symptom variables except BCSS, dyspnoea and sputum score (table IV).\nTable IIIMean changes (SD) in SGRQ total and domain scores from baselinea to end of treatmentbTable IVMean (SD) changes in chronic obstructive pulmonary disease (COPD) symptom variables from baselinea to the average over the randomized treatment period Improvements (i.e. reductions) in dyspnoea scores were significantly greater for all active treatments versus placebo (p ≤ 0.003) and for budesonide/formoterol 320/9 μg versus formoterol (p = 0.032; table IV). All active treatment arms demonstrated ≥0.2 points (minimal important difference) change from baseline, but only budesonide/formoterol 320/9 μg had an increase of ≥0.2 points over placebo. A significantly (p < 0.001) greater percentage of patients in both budesonide/formoterol groups and in the formoterol group experienced clinically meaningful improvements in dyspnoea versus placebo (see figure S3 in the supplementary material).\n\nBODY.RESULTS.SAFETY EVALUATIONS:\nMean treatment exposure was lowest for placebo (270 days) compared with budesonide/formoterol 320/9 μg (305 days), budesonide/formoterol 160/9 μg (299 days) and formoterol (289 days). The most commonly reported AE (irrespective of causality) was COPD, which had a lower incidence with budesonide/formoterol 320/9 μg versus placebo and a slightly higher incidence with budesonide/formoterol 160/9 μg versus placebo (table V). AEs considered by the investigator to be related to study medication were generally similar among treatment groups with the most commonly reported being oral candidiasis, COPD and dysphonia (see table SI in the supplementary material).\nTable VOverall adverse events (AEs) [irrespective of relationship to study medication] reported by ≥3% of patients The total incidence of pneumonia-related AEs (pneumonia, bronchopneumonia, lobar pneumonia and pneumonia staphylococcal) was similar for budesonide/formoterol 320/9 μg and 160/9 μg (4.0% and 3.4%, respectively) compared with formoterol (3.4%) and placebo (5.0%) [see table SII in the supplementary material]. For potential lung infections other than pneumonia, the incidence was slightly higher in the active treatment groups versus placebo, which was driven largely by bronchitis (see table SII in the supplementary material). 
AEs typically or potentially associated with local and systemic effects of inhaled corticosteroids (local effects: aphonia, dysphonia, oral candidiasis and candidiasis; systemic effects: weight gain, adrenal suppression, ocular effects, skin effects, psychiatric disorder, diabetes control, thirst, taste effects and bone effects) were more frequent with budesonide/formoterol 320/9 μg (10.3% and 4.0%, respectively) and budesonide/formoterol 160/9 μg (5.7% and 4.5%, respectively) than formoterol (0.6% and 2.6%, respectively) and placebo (2.5% and 2.7%, respectively). The overall incidence of LABA class effects (i.e. tremor, palpitation, tachycardia, potassium changes, glucose changes, headache, agitation, anxiety, sleep effects and muscle cramp) was low, but higher among active treatment groups (budesonide/formoterol 320/9 μg [9.5%], budesonide/formoterol 160/9 μg [8.9%] and formoterol [6.5%]) versus placebo (4.8%). The incidence of cardiac-related AEs was higher for all active treatments (10.5–11.3%) versus placebo (6.9%). The most common cardiac-related AEs were hypertension and angina pectoris (budesonide/formoterol 320/9 μg [2.4% and 1.2%, respectively], budesonide/formoterol 160/9 μg [1.6% and 0.8%, respectively], formoterol [2.6% and 1.8%, respectively] and placebo [2.1% and 1.0%, respectively]). Although the number of atrial fibrillation AEs was low, all cases occurred within the active treatment groups. Discontinuations due to AEs (DAEs) occurred in 238 patients, with a similar incidence observed across treatment groups (range 11.3–12.5%) [see table SIII in the supplementary material]. The most common DAE was COPD, which was highest in the formoterol group (7.3%) and lowest in the budesonide/formoterol 320/9 μg group (4.0%); the incidence was similar in the budesonide/formoterol 160/9 μg (6.1%) and placebo (6.0%) groups (see table SIII in the supplementary material). The overall incidence of non-fatal serious adverse events (SAEs) was highest with formoterol (17.8%) compared with the other treatment groups and higher with budesonide/formoterol 320/9 μg (15.6%) and budesonide/formoterol 160/9 μg (13.6%) versus placebo (12.1%) [see table SIV in the supplementary material]. The most common non-fatal SAEs (occurring in five or more patients [all treatment groups combined]) were COPD (6.8%), pneumonia (1.5%), atrial fibrillation (0.5%), angina pectoris (0.3%), bronchitis (0.3%) and coronary artery disease (0.3%). These non-fatal SAEs occurred at a similar incidence in all treatment groups, except for COPD, which was slightly more common in the active treatment groups (budesonide/formoterol 320/9 μg [7.1%], budesonide/formoterol 160/9 μg [6.7%] and formoterol [7.9%]) versus placebo (5.6%). Fifteen patients died during the randomized treatment period with no imbalance across treatments: three in the budesonide/formoterol 320/9 μg group; six in the budesonide/formoterol 160/9 μg group; two in the formoterol group; and four in the placebo group. Fifteen patients died after the randomized treatment period (≥2 days after stopping study treatment) with no imbalance across treatments: five in the budesonide/formoterol 320/9 μg group; two in the budesonide/formoterol 160/9 μg group; four in the formoterol group; and four in the placebo group. None of the deaths were considered drug related by the investigator. Clinically significant changes in vital signs, 24-hour urinary cortisol, and 12-lead ECGs, QT interval and Holter assessments were rare. 
The geometric mean values of 24-hour urinary cortisol at 6 months and end of treatment were lower in both budesonide/formoterol dosage groups compared with formoterol and placebo (see table SV in the supplementary material). Significant differences in 24-hour urinary cortisol were observed between the budesonide/formoterol 320/9 μg and placebo groups (p = 0.035) at month 6 and between the budesonide/formoterol 320/9 μg and formoterol groups (p = 0.044) at end of treatment. Corrected QT (QTc) intervals ≥450 msec were experienced by similar percentages of patients across all treatment groups, whereas few patients experienced QTc intervals ≥500 msec or a QTc change ≥60 msec (see table SVI in the supplementary material). Data from ECG and Holter recordings showed a similar incidence of new onset atrial fibrillation in the active and placebo groups. BMD was stable over the 1-year study period in all treatment groups, with small but statistically significant differences in changes from baseline observed for budesonide/formoterol 320/9 μg compared with all other treatments for total lumbar spine BMD (p ≤ 0.037) and for budesonide/formoterol 320/9 μg versus formoterol for total hip BMD (p = 0.012) [see table SVII in the supplementary material]. The differences between the treatment groups in changes from baseline in BMD were of unclear clinical relevance, as the geometric least squares mean ratios for these treatment group comparisons were close to 1 (range 0.98–0.99). Minor increases in lenticular opacities and intraocular pressure were noted across all treatment groups in the ophthalmological analysis subset; the magnitude of these changes was similar among the treatment groups (see table SVIII in the supplementary material). Small but significant differences were observed for the change from baseline in posterior subcapsular score between the budesonide/formoterol groups (p = 0.022). Clinically significant changes in ophthalmological assessment were infrequent.\n\nBODY.DISCUSSION:\nIn this study, both budesonide/formoterol dosage strengths resulted in significant improvements from baseline over the randomized treatment period in pulmonary function, health status and COPD symptoms, as measured by dyspnoea, cough and sputum production, and a clinically significant reduction in COPD exacerbation rate compared with placebo. In addition, both budesonide/formoterol pMDI dosages were well tolerated relative to formoterol alone and placebo. The results of this 12-month study confirm those from the earlier related 6-month study by Tashkin et al.[4] and are consistent with those from studies of budesonide/formoterol DPI.[2,3] Taken together, the results demonstrate that this ICS/LABA combination provides benefits beyond formoterol alone in the treatment of COPD. The contribution of budesonide to improvements from baseline in pulmonary function and COPD symptoms was demonstrated by the greater efficacy of budesonide/formoterol 320/9 μg compared with formoterol alone for the co-primary variables of pre-dose FEV1 and 1-hour post-dose FEV1, and for dyspnoea and total BCSS scores, supporting use of the higher-dose combination for COPD symptoms. Both budesonide/formoterol dosage strengths demonstrated greater improvements from baseline compared with formoterol alone in morning and evening PEF, 12-hour FEV1, symptom scores and rescue medication use. 
In addition, the 25–29% reduction in exacerbation rate observed with both budesonide/formoterol dosage strengths beyond the substantial reduction achieved with formoterol alone further demonstrates the important contribution of budesonide to the combination product. The magnitude of the reduction in exacerbation rates reported in this study is similar to that reported previously for fixed-dose regimens of budesonide/formoterol 320/9 μg twice daily[2–4] and fluticasone propionate/salmeterol[11–13] relative to LABA alone. A key difference between this 12-month study and the 6-month study of similar design reported by Tashkin et al.[4] is that this study was powered to show a difference in exacerbations, while the 6-month study was not. This study demonstrated a significant reduction in exacerbation rates in patients treated with budesonide/formoterol 320/9 μg and 160/9 μg compared with those receiving formoterol or placebo. The fact that a greater percentage of patients in the formoterol and placebo groups who previously received ICS therapy, either alone or in combination, experienced exacerbations compared with those who did not previously receive ICS therapy may result in a skewing of the data; however, a similar response was observed in combination budesonide/formoterol patients with a greater percentage of patients who previously received ICS experiencing exacerbations compared with patients who were not previously treated with ICS therapy. Similarly, a greater percentage of patients who previously received ICS therapy discontinued in the formoterol and placebo groups than those who did not previously receive ICS therapy; however, this trend was reversed in the combination budesonide/formoterol groups. The long-term nature of this study allowed for a more thorough assessment of tolerability compared with the 6-month study.[4] No differences in pneumonia incidence were observed among treatment groups in this study or in the 6-month study, despite a shorter exposure time in the placebo group.[4] These results contrast with previous COPD studies that showed an increased incidence of pneumonia with fluticasone propionate/salmeterol treatment.[11–14] However, in this study and the 6-month study,[4] the incidence of potential lung infections other than pneumonia was slightly higher in all active treatments compared with placebo; these differences were largely driven by an increased incidence of bronchitis. The incidence of SAEs, including COPD SAEs, was slightly higher in both budesonide/formoterol groups compared with placebo. This finding may be attributed to the earlier discontinuation in the placebo group, which resulted in a shorter study drug exposure (35 and 29 fewer days vs budesonide/formoterol 320/9 μg and 160/9 μg, respectively). Although the overall incidence of atrial fibrillation, a known LABA class effect, was small (1.1%), an imbalance was noted between the formoterol-containing arms and placebo. However, no differences in new-onset atrial fibrillation were apparent among groups based on 24-hour Holter monitoring or ECG data collected at repeated visits during the randomized treatment period. 
Overall, the safety profile of budesonide/formoterol pMDI in this 12-month study was similar to that reported in the 6-month study by Tashkin et al.,[4] suggesting no increase in safety concerns with a longer duration of treatment.\n\nBODY.CONCLUSIONS:\nBoth dosage strengths of budesonide/formoterol pMDI (320/9 μg and 160/9 μg) were effective and well tolerated over 1 year of treatment in this population of patients with moderate to very severe COPD. In addition, budesonide/formoterol pMDI 320/9 μg demonstrated greater efficacy compared with formoterol for pre-dose FEV1, suggesting that the higher dosage strength containing twice the daily dose of budesonide is appropriate in this patient population.\n\nBODY.ELECTRONIC SUPPLEMENTARY MATERIAL:\nSupplementary material, approximately 247 KB.\n\n**Question:** Compared to Placebo what was the result of Budesonide/formoterol pMDI 320/9 μg on Time to discontinuation?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly increased\n"
}
] |
significantly increase
|
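The COPD-trial record above states that roughly 400 patients per arm gave more than 95% power to detect a 0.10 L difference in FEV1, given a standard deviation of 0.3 L. The sketch below is a simple normal-approximation check of that statement, not the trial's own sample-size calculation (which also adjusted the exacerbation endpoint for overdispersion); the variable names are illustrative assumptions.

```python
# Rough power check for the FEV1 sample-size statement quoted above:
# two-sided z-test, 400 patients per arm, SD 0.3 L, difference 0.10 L.
# Normal approximation only; not the trial's actual calculation.
from scipy.stats import norm

n_per_group = 400   # approximate randomised patients per arm
sd = 0.3            # L, assumed common standard deviation
delta = 0.10        # L, difference in FEV1 to detect
alpha = 0.05        # two-sided significance level

se_diff = sd * (2 / n_per_group) ** 0.5
z_crit = norm.ppf(1 - alpha / 2)
power = norm.cdf(delta / se_diff - z_crit)
print(f"approximate power: {power:.3f}")  # about 0.997, i.e. > 95%
```

With these inputs the approximate power is about 99.7%, consistent with the ">95% power" claim in the record.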
355
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Efficacy and Tolerability of Budesonide/Formoterol in One Hydrofluoroalkane Pressurized Metered-Dose Inhaler in Patients with Chronic Obstructive Pulmonary Disease\n\n ABSTRACT:\nBackground: Combination therapy with a long-acting bronchodilator and an inhaled corticosteroid (ICS) is recommended in patients with chronic obstructive pulmonary disease (COPD) who have frequent exacerbations. The efficacy and tolerability of the combination of budesonide/formoterol have been demonstrated in patients with COPD when administered via the dry powder inhaler (DPI) in a 1-year study and when administered via the hydrofluoroalkane (HFA) pressurized metered-dose inhaler (pMDI) in a 6-month study. Objective: This study assessed the long-term efficacy and tolerability of budesonide/formoterol HFA pMDI in patients with moderate to very severe COPD. Methods: This was a 12-month, randomized, double-blind, double-dummy, parallel-group, active- and placebo-controlled, multicentre study (NCT00206167) of 1964 patients aged ≥40 years with moderate to very severe COPD conducted from 2005 to 2007 at 237 sites in the US, Europe and Mexico. After 2 weeks of treatment based on previous therapy (ICSs, short-acting bronchodilators allowed), patients received one of the following treatments twice daily: budesonide/formoterol pMDI 160/4.5 μg × two inhalations (320/9 μg); budesonide/formoterol pMDI 80/4.5 μg × two inhalations (160/9 μg); formoterol DPI 4.5 μg × two inhalations (9 μg); or placebo. Main outcome measures: The co-primary efficacy variables were pre-dose forced expiratory volume in 1 second (FEV1) and 1-hour post-dose FEV1. Results: Budesonide/formoterol 320/9 μg demonstrated greater improvements in pre-dose FEV1 versus formoterol (p = 0.008), and both budesonide/formoterol doses demonstrated greater improvements in 1-hour post-dose FEV1 versus placebo (p < 0.001). The rate of COPD exacerbations was lower in both budesonide/formoterol groups compared with formoterol and placebo (p ≤ 0.004). Both budesonide/formoterol doses were more effective than placebo (p ≤ 0.006) for controlling dyspnoea and improving health status (St George's Respiratory Questionnaire). All treatments were generally well tolerated. The incidence of pneumonia was not different for active (3.4–4.0%) and placebo (5.0%) groups. Conclusions: Budesonide/formoterol pMDI (320/9 μg and 160/9 μg) improved pulmonary function and reduced symptoms and exacerbations over 1 year in patients with moderate to very severe COPD. Only budesonide/formoterol pMDI 320/9 μg demonstrated greater efficacy for both co-primary variables compared with formoterol DPI 9 μg. 
Both budesonide/formoterol pMDI dosages were well tolerated relative to formoterol and placebo.\n\nABSTRACT.ELECTRONIC SUPPLEMENTARY MATERIAL:\nSupplementary material is available for this article at 10.2165/00003495-200969050-00004 and is accessible for authorized users.\n\nBODY.BACKGROUND:\nIn patients with chronic obstructive pulmonary disease (COPD) and frequent exacerbations, the combination of a long-acting bronchodilator and an inhaled corticosteroid (ICS) is recommended.[1] Budesonide and formoterol delivered via a single dry powder inhaler (DPI; Symbicort® Turbuhaler®; AstraZeneca, Lund, Sweden) have been shown to improve pulmonary function[2,3] and reduce COPD exacerbation rates.[2,3] Fixed-combination budesonide/formoterol is available in the US in a hydrofluoroalkane pressurized metered-dose inhaler (pMDI; Symbicort® Inhalation Aerosol; AstraZeneca LP, Wilmington, DE, USA). While most patients with COPD can benefit from either a pMDI or DPI, the pMDI formulation allows patients who may have inadequate peak inspiratory flow for optimal delivery from a DPI to use the pMDI effectively. A recent 6-month study demonstrated the efficacy and tolerability of budesonide/formoterol pMDI in patients with moderate to very severe COPD.[4] Treatment with budesonide/formoterol pMDI resulted in significant improvements in pulmonary function, dyspnoea and health-related quality of life; however, the duration of the study was limiting with respect to the analysis of low-frequency events, such as COPD exacerbations, and precluded an assessment of safety outcomes with long-term exposure.[4] In this study, the long-term efficacy and tolerability of two dosage strengths of budesonide/formoterol pMDI were assessed over 12 months in patients with moderate to very severe COPD.\n\nBODY.PATIENTS AND METHODS.PATIENTS:\nThe inclusion criteria were designed to select a population with moderate to very severe COPD with previous exacerbations (i.e. appropriate candidates for combination ICS/long-acting β2-adrenoceptor agonist [LABA] therapy): age ≥40 years, diagnosis of symptomatic COPD for >2 years, ≥10 pack-year smoking history, prebronchodilator forced expiratory volume in 1 second (FEV1) of ≤50% of predicted normal and prebronchodilator FEV1/forced vital capacity (FVC) of <70%. Patients were to have a Modified Medical Research Council dyspnoea scale score of ≥2 and a history of at least one COPD exacerbation requiring oral corticosteroids or antibacterials within 1–12 months before the first study visit. Additional enrolment criteria were the same as those in a similar 6-month study by Tashkin et al.[4]\n\nBODY.PATIENTS AND METHODS.STUDY DESIGN:\nThis was a randomized, double-blind, double-dummy, parallel-group, active- and placebo-controlled, multicentre trial (ClinicalTrials.gov identifier: NCT00206167) conducted from 2005 to 2007 at 237 sites in the US, Europe and Mexico. The study protocol was approved by a local institutional review board and ethics committee, and written informed consent was obtained from patients. The study was designed to conform with the Declaration of Helsinki, and was consistent with the International Conference on Harmonisation and Good Clinical Practice and applicable regulatory requirements. After meeting eligibility criteria, patients entered a 2-week run-in period, during which they received ICS monotherapy if previously stable on ICS (alone or in combination) and ipratropium bromide at a fixed dose if previously receiving anticholinergics. 
Albuterol (salbutamol) was permitted for rescue use throughout the study. At randomization, all previous ICSs were discontinued and patients were randomized to one of four treatments (figure 1). Patients who were treated with ipratropium bromide during the run-in period were allowed to continue on it throughout the study period. Other allowed and disallowed concomitant medications were the same as those reported in the study by Tashkin et al.[4]\nFig. 1Study design. To maintain blinding, patients received both a pressurized metered-dose inhaler (pMDI) and a dry powder inhaler (DPI) containing either active treatment or double-dummy placebo (PL) as appropriate. This study used formoterol (FM) DPI (Oxis® Turbuhaler®, AstraZeneca, Lund, Sweden) as the FM comparator because FM is not available as a hydrofluoroalkane pMDI in the US. A previous study in asthma patients reported equivalent FM-related bronchodilatory effects when FM was administered in combination with budesonide (BUD) via pMDI or alone via DPI.[5] Patients were asked to return to the clinic for follow-up visits 3–8 at the end of months 1, 2, 4, 6, 9 and 12, and received a telephone call 4 weeks after the last clinic visit. bid = twice daily; R = randomization.\n\nBODY.PATIENTS AND METHODS.EFFICACY EVALUATIONS:\nThe co-primary efficacy variables, measured at all clinic visits, were pre-dose FEV1 and 1-hour post-dose FEV1. Additional pulmonary function variables included pre-dose and 1-hour post-dose FVC measured at all clinic visits, and morning and evening peak expiratory flow (PEF) recorded daily in patient diaries. In a subset of patients (n = 491 of 1964; 25%) who agreed to undergo spirometry testing, 12-hour serial spirometry was performed and pre-dose and 1-hour post-dose inspiratory capacity (IC) were collected at randomization and months 6 and 12. FEV1 was measured pre-dose and at 5, 15, 30, 60, 120, 180, 240, 360, 480, 600 and 720 minutes post-dose. Baseline-adjusted average 12-hour FEV1 was calculated as the area between the 12-hour post-dose FEV1-over-time curve and the baseline pre-dose FEV1, divided by observation time. Spirometry was performed according to American Thoracic Society guidelines.[6] Crapo-predicted normals for FEV1 were used.[7] Secondary efficacy endpoints included COPD exacerbations, dyspnoea (separate and composite scores) and health status, all assessed as previously described.[4] A COPD exacerbation was defined as worsening of COPD requiring an oral corticosteroid or hospitalization.[4] Dyspnoea was assessed daily before the evening dose of study medication using the Breathlessness Diary, a validated single-item component of the Breathlessness Cough and Sputum Scale (BCSS).[8] Health status was assessed at months 1, 2, 6 and 12 using the St George's Respiratory Questionnaire (SGRQ).[9,10] Sleep score, percentage of awakening-free nights (sleep score of 0) and study rescue medication use were assessed as previously described by Tashkin et al.[4]\n\nBODY.PATIENTS AND METHODS.SAFETY EVALUATIONS:\nSafety was assessed by adverse event (AE) reporting. Pneumonia events were reported by physicians based on the Medical Dictionary for Regulatory Activities (version 10.0) pneumonia-related preferred terms (pneumonia, bronchopneumonia, lobar pneumonia or pneumonia staphylococcal). 
Vital signs and 12-lead ECGs were evaluated as previously described by Tashkin et al.[4] Subsets of patients were assessed for 24-hour urinary cortisol (n = 179), 24-hour Holter monitoring (n = 520), bone mineral density (BMD) at the hip and spine regions (n = 326), and ophthalmological assessments (n = 461), including intraocular pressure and progression of lenticular opacities. In all patients, blood and urine samples were collected, and comprehensive physical examinations were carried out at the time of screening and at the end of month 12. Blood and urine samples were analysed by a central laboratory (Quest Diagnostics Clinical Laboratories, Van Nuys, CA, USA). Vital signs, including heart rate and blood pressure, were measured at all study visits. Samples to assess 24-hour urinary cortisol levels were collected in a subset of patients at or before randomization and within 1 week before the 6- and 12-month visits, and analysed by Quest Diagnostics. 24-Hour Holter monitoring was performed at screening and months 1 and 4, BMD assessments (two sets of dual energy x-ray absorptiometry scans in the hip and spine regions) were performed at screening and the end of month 12, and ophthalmology assessments (intraocular pressure and lenticular opacities) were performed at screening and the end of months 6 and 12.\n\nBODY.PATIENTS AND METHODS.STATISTICAL ANALYSES:\nThe efficacy analysis set (i.e. intent-to-treat population) included all randomized patients who received at least one dose of randomized study medication and contributed sufficient data for at least one co-primary or secondary efficacy endpoint to be calculated during the randomized treatment period. The safety analysis population included all randomized patients who received at least one dose of randomized study medication and from whom any post-randomization data were available. For the subsets of patients who underwent serial spirometry, 24-hour urinary cortisol, BMD, 24-hour Holter monitoring and ophthalmological analyses, analysis sets were defined as those patients who received at least one dose of randomized study medication and had baseline and on-treatment values for the variable being assessed. Patients who discontinued prematurely completed final visit (month 12) assessments at the time of discontinuation, followed by the 4-week follow-up. A sample size of approximately 400 patients per treatment group was calculated to allow 90% power to detect a reduction in the number of COPD exacerbations of approximately 30%, adjusting for overdispersion of 2.3. This sample size ensured >95% power to detect a difference of 0.10L in FEV1, given an estimated standard deviation of 0.3 L. All tests were two-sided, with p ≤ 0.05 considered significant. The prespecified primary comparators were formoterol DPI for pre-dose FEV1 to demonstrate the contribution of budesonide and placebo for 1-hour post-dose FEV1. To address issues of multiplicity relating to multiple-dose comparisons and multiple variables of interest, a fixed-sequence testing procedure was adopted. The primary comparisons were prespecified as budesonide/formoterol pMDI 320/9 μg compared with (i) placebo for pre-dose FEV1 and 1-hour post-dose FEV1; and (ii) formoterol DPI 9 μg for pre-dose FEV1. If significant differences were obtained, comparisons continued with budesonide/formoterol pMDI 320/9 μg compared with placebo for (i) dyspnoea; (ii) SGRQ total score; and (iii) number of exacerbations. 
If significant differences were obtained for each of these key secondary variables, the lower dose of budesonide/formoterol pMDI was tested on the co-primary variables, as previously described, and, if significant differences were obtained, testing continued with number of exacerbations, dyspnoea and SGRQ total score compared with placebo. For all secondary efficacy variables, the primary comparison was budesonide/formoterol pMDI versus placebo. Primary and secondary variables were assessed as the change from baseline to the average over the randomized treatment period except for average 12-hour FEV1, mean FEV1 at 12 hours and SGRQ at end of treatment. Changes from baseline in the co-primary efficacy variables were analysed via analysis of co-variance (ANCOVA), adjusting for treatment, country and baseline value. The following three key secondary efficacy variables were identified: (i) dyspnoea; (ii) SGRQ total score; and (iii) exacerbations. Secondary variables were analysed similar to the co-primary efficacy variables; however, exacerbation rate (number/patient-treatment year) was analysed using a Poisson regression model, adjusted for country and differential randomization time, and time to first COPD exacerbation was described via Kaplan-Meier plot and analysed using the log-rank test. The number and percentage of patients experiencing clinically meaningful changes (based on a prespecified minimal important difference) in dyspnoea were analysed using the Cochran-Mantel-Haenszel test, adjusting for country. Geometric mean 24-hour urinary cortisol levels at end of treatment were compared between treatment groups using a multiplicative ANCOVA model. Mean changes from baseline to the average during the randomized treatment period in 12-lead ECG, and Holter variables and ophthalmology variables were analysed using a model similar to that used for the co-primary variables. BMD variables (natural logarithm of the analysis timepoint minus the natural logarithm of the respective baseline value for hip and spine) were analysed using an ANCOVA model adjusting for country, treatment and natural logarithm of the baseline value. Other safety data were summarized using descriptive statistics. No formal hypothesis testing of the safety data was performed.\n\nBODY.RESULTS.PATIENTS:\nOf 1964 randomized patients, 1355 completed the study (figure 2). Discontinuation was greater with placebo versus budesonide/formoterol and formoterol driven by withdrawal of consent. Time to discontinuation was significantly (p ≤ 0.004) prolonged in both budesonide/formoterol groups versus placebo. AE was the most common reason for study discontinuation, with a similar incidence across treatments (12.1–13.5%). Approximately 60% of patients had documented use of an ICS, either alone or in combination before entering the study. The percentage of patients who discontinued from the study in the formoterol and placebo groups was slightly higher in patients previously treated with ICS (34.9% and 38.0%, respectively) compared with those not previously receiving ICS before the study (26.8% and 33.9%, respectively). In contrast, patients in the budesonide/formoterol 320/9-μg group previously receiving ICS had a lower discontinuation rate compared with those not previously receiving ICS (23.1% vs 34.1%, respectively). In the budesonide/formoterol 160/9-μg group, the percentage of patients who discontinued the study was similar for those who were previously receiving an ICS (29.5%) and those who were not (28.2%).\nFig. 
2Patient disposition. AE = adverse event; bid = twice daily; BUD = budesonide; DPI = dry powder inhaler; FM = formoterol; med = medication; PL = placebo; pMDI = pressurized metered-dose inhaler. Most demographic and disease characteristics at baseline were similar across treatments (table I). However, in the serial spirometry subset (n = 491), mean percentage reversibility was greater in the budesonide/formoterol 320/9 μg (19.9%), budesonide/formoterol 160/9 μg (20.7%) and placebo (19.5%) groups versus formoterol (16.9%). COPD severity[1] was moderate in 17.8%, severe in 60.4% and very severe in 21.5% of patients. Common co-morbid conditions included hypertension (41.6%), lipid profile abnormalities (22.0%), cardiac disease (17.7%), diabetes mellitus (11.0%), osteoporosis (10.5%), cataracts (5.2%), atrial fibrillation/arrhythmia (4.3%) and congestive cardiac failure (2.9%).\nTable IPatient demographic and baseline clinical characteristics of randomized patients\n\nBODY.RESULTS.EFFICACY EVALUATIONS.PULMONARY FUNCTION.CO-PRIMARY ASSESSMENTS:\nImprovements in pre-dose FEV1 were significantly greater for budesonide/formoterol 320/9 μg compared with formoterol (primary comparison; p = 0.008) and for both budesonide/formoterol dosages compared with placebo (p < 0.001) [figure 3a]. Although improvements in 1-hour post-dose FEV1 were significantly greater for both budesonide/formoterol dosages compared with placebo (primary comparison; p < 0.001), budesonide/formoterol 320/9 μg also resulted in significantly greater improvements compared with formoterol (p = 0.023) [figure 3b]. Improvements from baseline were apparent at the first assessment (pre-dose FEV1 at end of month 1; 1-hour post-dose FEV1 at day of randomization) and overall maintained over the 12-month treatment period for both budesonide/formoterol dosages (figure 3a and b).\nFig. 3Co-primary efficacy endpoints. Least squares mean change from baseline by study visit over the randomized treatment period in (a) pre-dose forced expiratory volume in 1 second (FEV1) and (b) 1-hour post-dose FEV1. BUD = budesonide; DPI = dry powder inhaler; FM = formoterol; pMDI = pressurized metered-dose inhaler. * p < 0.001 vs placebo; † p ≤ 0.023 vs FM.\n\nBODY.RESULTS.EFFICACY EVALUATIONS.PULMONARY FUNCTION.SECONDARY ASSESSMENTS:\nMorning and evening PEF improved significantly for all active treatments compared with placebo (p ≤ 0.012) and for both budesonide/formoterol dosages compared with formoterol (p ≤ 0.017) [table II]. Results for pre-dose and 1-hour post-dose FVC are presented in the online supplement (see figures S1a and b in the supplementary material ['ArticlePlus'] at http://links.adisonline.com/DGZ/A5).\nTable IIMean (SD) changes from baseline in additional pulmonary function assessments In the serial spirometry subset, a ≥15% improvement in FEV1 was observed at 5 minutes after dose administration (first assessment) with all active treatments on the day of randomization (budesonide/formoterol 320/9 μg, 17.7%; budesonide/formoterol 160/9 μg, 20.3%; formoterol 9 μg, 16.5%) and at end of treatment (budesonide/formoterol 320/9 μg, 21.8%; budesonide/formoterol 160/9 μg, 22.4%; formoterol 9 μg, 15.0%) [figures 4a, b and c]. There was no evidence of a diminished effect at end of treatment over the 12-hour period in the budesonide/formoterol groups. 
Mean FEV1 at 12 hours and baseline-adjusted average 12-hour FEV1 were significantly improved with both budesonide/formoterol dosages compared with placebo on the day of randomization and at end of treatment (p ≤ 0.002) [table II]. Although both budesonide/formoterol dosages demonstrated significant improvements in mean FEV1 at 12 hours and in baseline-adjusted average 12-hour FEV1 compared with formoterol on the day of randomization (p ≤ 0.029), only budesonide/formoterol 320/9 μg demonstrated this effect at the end of treatment (p ≤ 0.004) [table II].\nFig. 4Mean percentage change from baseline in forced expiratory volume in 1 second (FEV1) over 12 hours at randomization and end of treatment (EOT) for (a) budesonide (BUD)/formoterol (FM) 320/9 μg twice daily (bid) vs placebo, (b) BUD/FM 160/9 μg bid vs placebo and (c) FM 9 μg bid vs placebo. DOR = day of randomization; DPI = dry powder inhaler; pMDI = pressurized metered-dose inhaler. No significant differences in pre-dose IC were observed among the treatment groups (figure 5a). Improvements from baseline to the average over the randomized treatment period in 1-hour post-dose IC were significantly greater with both budesonide/formoterol dosages compared with placebo (p < 0.001) and formoterol (p ≤ 0.018) [figure 5b]. Both dosages of budesonide/formoterol resulted in a 350 mL (≈25%) improvement from baseline to the average over the randomized treatment period in 1-hour post-dose IC, and formoterol resulted in a 210 mL (17%) improvement in 1-hour post-dose IC.\nFig. 5Least squares mean change from baseline by study visit over the randomized treatment period in (a) pre-dose inspiratory capacity (IC) and (b) 1-hour post-dose IC. BUD = budesonide; DPI = dry powder inhaler; FM = formoterol; pMDI = pressurized metered-dose inhaler. * p < 0.001 vs placebo; † p = 0.01 vs FM; ‡ p < 0.05 vs FM; § p < 0.01 vs placebo.\n\nBODY.RESULTS.EFFICACY EVALUATIONS.CHRONIC OBSTRUCTIVE PULMONARY DISEASE (COPD) EXACERBATIONS:\nTime to first COPD exacerbation was significantly prolonged with both budesonide/formoterol dosages compared with placebo (p ≤ 0.004) and with budesonide/formoterol 320/9 μg compared with formoterol (p = 0.026) [figure 6]. In addition, significant reductions in the overall number of exacerbations per patient-treatment year were observed with budesonide/formoterol 320/9 μg and 160/9 μg versus placebo (37% and 41%, respectively; p < 0.001) and formoterol (25% and 29%, respectively; p ≤ 0.004) [see figure S2 in the supplementary material]. These reductions were driven by exacerbations treated with oral corticosteroids, the rate of which was reduced with budesonide/formoterol 320/9 μg and 160/9 μg versus placebo (37.5% and 42.9%, respectively) and formoterol (24.1% and 30.6%, respectively) [p ≤ 0.006]. In addition, the percentage of patients in the budesonide/formoterol 320/9 μg, budesonide/formoterol 160/9 μg, formoterol and placebo groups who experienced an exacerbation was greater in those receiving ICS therapy either alone or in combination before the study (33.7%, 38.2%, 39.9% and 40.1%, respectively) compared with those not previously receiving ICS (25.8%, 24.9%, 29.4% and 32.8%, respectively).\nFig. 6Kaplan-Meier probability curve for the time to first chronic obstructive pulmonary disease exacerbation during randomized treatment. BUD = budesonide; DPI = dry powder inhaler; FM = formoterol; pMDI = pressurized metered-dose inhaler. 
* p ≤ 0.004 vs placebo; † p = 0.026 vs FM.\n\nBODY.RESULTS.EFFICACY EVALUATIONS.HEALTH STATUS AND COPD SYMPTOMS:\nImprovements in SGRQ total score were significantly greater for both budesonide/formoterol dosages compared with placebo (p ≤ 0.006) and for budesonide/formoterol 160/9 μg compared with formoterol (p = 0.006; table III). The improvements from baseline in SGRQ total score in the active treatment groups were not clinically meaningful versus placebo (i.e. decrease of ≥4 points). Significantly greater improvements in COPD symptom variables were observed for both budesonide/formoterol dosages versus placebo for all variables (BCSS, dyspnoea score, cough score, sputum score, sleep score, percentage of awakening-free nights and total daily rescue medication use) [p ≤ 0.003]. Compared with formoterol, mean improvements were significantly greater with budesonide/formoterol 320/9 μg (p ≤ 0.038) for all COPD symptom variables except sputum score and percentage of awakening-free nights, and with budesonide/formoterol 160/9 μg (p ≤ 0.047) for all COPD symptom variables except BCSS, dyspnoea and sputum score (table IV).\nTable III Mean changes (SD) in SGRQ total and domain scores from baselinea to end of treatmentb\nTable IV Mean (SD) changes in chronic obstructive pulmonary disease (COPD) symptom variables from baselinea to the average over the randomized treatment period Improvements (i.e. reductions) in dyspnoea scores were significantly greater for all active treatments versus placebo (p ≤ 0.003) and for budesonide/formoterol 320/9 μg versus formoterol (p = 0.032; table IV). All active treatment arms demonstrated ≥0.2 points (minimal important difference) change from baseline, but only budesonide/formoterol 320/9 μg had an increase of ≥0.2 points over placebo. A significantly (p < 0.001) greater percentage of patients in both budesonide/formoterol groups and in the formoterol group experienced clinically meaningful improvements in dyspnoea versus placebo (see figure S3 in the supplementary material).\n\nBODY.RESULTS.SAFETY EVALUATIONS:\nMean treatment exposure was lowest for placebo (270 days) compared with budesonide/formoterol 320/9 μg (305 days), budesonide/formoterol 160/9 μg (299 days) and formoterol (289 days). The most commonly reported AE (irrespective of causality) was COPD, which had a lower incidence with budesonide/formoterol 320/9 μg versus placebo and a slightly higher incidence with budesonide/formoterol 160/9 μg versus placebo (table V). AEs considered by the investigator to be related to study medication were generally similar among treatment groups with the most commonly reported being oral candidiasis, COPD and dysphonia (see table SI in the supplementary material).\nTable V Overall adverse events (AEs) [irrespective of relationship to study medication] reported by ≥3% of patients The total incidence of pneumonia-related AEs (pneumonia, bronchopneumonia, lobar pneumonia and pneumonia staphylococcal) was similar for budesonide/formoterol 320/9 μg and 160/9 μg (4.0% and 3.4%, respectively) compared with formoterol (3.4%) and placebo (5.0%) [see table SII in the supplementary material]. For potential lung infections other than pneumonia, the incidence was slightly higher in the active treatment groups versus placebo, which was driven largely by bronchitis (see table SII in the supplementary material). 
AEs typically or potentially associated with local and systemic effects of inhaled corticosteroids (local effects: aphonia, dysphonia, oral candidiasis and candidiasis; systemic effects: weight gain, adrenal suppression, ocular effects, skin effects, psychiatric disorder, diabetes control, thirst, taste effects and bone effects) were more frequent with budesonide/formoterol 320/9 μg (10.3% and 4.0%, respectively) and budesonide/formoterol 160/9 μg (5.7% and 4.5%, respectively) than formoterol (0.6% and 2.6%, respectively) and placebo (2.5% and 2.7%, respectively). The overall incidence of LABA class effects (i.e. tremor, palpitation, tachycardia, potassium changes, glucose changes, headache, agitation, anxiety, sleep effects and muscle cramp) was low, but higher among active treatment groups (budesonide/formoterol 320/9 μg [9.5%], budesonide/formoterol 160/9 μg [8.9%] and formoterol [6.5%]) versus placebo (4.8%). The incidence of cardiac-related AEs was higher for all active treatments (10.5–11.3%) versus placebo (6.9%). The most common cardiac-related AEs were hypertension and angina pectoris (budesonide/formoterol 320/9 μg [2.4% and 1.2%, respectively], budesonide/formoterol 160/9 μg [1.6% and 0.8%, respectively], formoterol [2.6% and 1.8%, respectively] and placebo [2.1% and 1.0%, respectively]). Although the number of atrial fibrillation AEs was low, all cases occurred within the active treatment groups. Discontinuations due to AEs (DAEs) occurred in 238 patients, with a similar incidence observed across treatment groups (range 11.3–12.5%) [see table SIII in the supplementary material]. The most common DAE was COPD, which was highest in the formoterol group (7.3%) and lowest in the budesonide/formoterol 320/9 μg group (4.0%); the incidence was similar in the budesonide/formoterol 160/9 μg (6.1%) and placebo (6.0%) groups (see table SIII in the supplementary material). The overall incidence of non-fatal serious adverse events (SAEs) was highest with formoterol (17.8%) compared with the other treatment groups and higher with budesonide/formoterol 320/9 μg (15.6%) and budesonide/formoterol 160/9 μg (13.6%) versus placebo (12.1%) [see table SIV in the supplementary material]. The most common non-fatal SAEs (occurring in five or more patients [all treatment groups combined]) were COPD (6.8%), pneumonia (1.5%), atrial fibrillation (0.5%), angina pectoris (0.3%), bronchitis (0.3%) and coronary artery disease (0.3%). These non-fatal SAEs occurred at a similar incidence in all treatment groups, except for COPD, which was slightly more common in the active treatment groups (budesonide/formoterol 320/9 μg [7.1%], budesonide/formoterol 160/9 μg [6.7%] and formoterol [7.9%]) versus placebo (5.6%). Fifteen patients died during the randomized treatment period with no imbalance across treatments: three in the budesonide/formoterol 320/9 μg group; six in the budesonide/formoterol 160/9 μg group; two in the formoterol group; and four in the placebo group. Fifteen patients died after the randomized treatment period (≥2 days after stopping study treatment) with no imbalance across treatments: five in the budesonide/formoterol 320/9 μg group; two in the budesonide/formoterol 160/9 μg group; four in the formoterol group; and four in the placebo group. None of the deaths were considered drug related by the investigator. Clinically significant changes in vital signs, 24-hour urinary cortisol, and 12-lead ECGs, QT interval and Holter assessments were rare. 
The geometric mean values of 24-hour urinary cortisol at 6 months and end of treatment were lower in both budesonide/formoterol dosage groups compared with formoterol and placebo (see table SV in the supplementary material). Significant differences in 24-hour urinary cortisol were observed between the budesonide/formoterol 320/9 μg and placebo groups (p = 0.035) at month 6 and between the budesonide/formoterol 320/9 μg and formoterol groups (p = 0.044) at end of treatment. Corrected QT (QTc) intervals ≥450 msec were experienced by similar percentages of patients across all treatment groups, whereas few patients experienced QTc intervals ≥500 msec or a QTc change ≥60 msec (see table SVI in the supplementary material). Data from ECG and Holter recordings showed a similar incidence of new onset atrial fibrillation in the active and placebo groups. BMD was stable over the 1-year study period in all treatment groups, with small but statistically significant differences in changes from baseline observed for budesonide/formoterol 320/9 μg compared with all other treatments for total lumbar spine BMD (p ≤ 0.037) and for budesonide/formoterol 320/9 μg versus formoterol for total hip BMD (p = 0.012) [see table SVII in the supplementary material]. The differences between the treatment groups in changes from baseline in BMD were of unclear clinical relevance, as the geometric least squares mean ratios for these treatment group comparisons were close to 1 (range 0.98–0.99). Minor increases in lenticular opacities and intraocular pressure were noted across all treatment groups in the ophthalmological analysis subset; the magnitude of these changes was similar among the treatment groups (see table SVIII in the supplementary material). Small but significant differences were observed for the change from baseline in posterior subcapsular score between the budesonide/formoterol groups (p = 0.022). Clinically significant changes in ophthalmological assessment were infrequent.\n\nBODY.DISCUSSION:\nIn this study, both budesonide/formoterol dosage strengths resulted in significant improvements from baseline over the randomized treatment period in pulmonary function, health status and COPD symptoms, as measured by dyspnoea, cough and sputum production, and a clinically significant reduction in COPD exacerbation rate compared with placebo. In addition, both budesonide/formoterol pMDI dosages were well tolerated relative to formoterol alone and placebo. The results of this 12-month study confirm those from the earlier related 6-month study by Tashkin et al.[4] and are consistent with those from studies of budesonide/formoterol DPI.[2,3] Taken together, the results demonstrate that this ICS/LABA combination provides benefits beyond formoterol alone in the treatment of COPD. The contribution of budesonide to improvements from baseline in pulmonary function and COPD symptoms was demonstrated by the greater efficacy of budesonide/formoterol 320/9 μg compared with formoterol alone for the co-primary variables of pre-dose FEV1 and 1-hour post-dose FEV1, and for dyspnoea and total BCSS scores, supporting use of the higher-dose combination for COPD symptoms. Both budesonide/formoterol dosage strengths demonstrated greater improvements from baseline compared with formoterol alone in morning and evening PEF, 12-hour FEV1, symptom scores and rescue medication use. 
In addition, the 25–29% reduction in exacerbation rate observed with both budesonide/formoterol dosage strengths beyond the substantial reduction achieved with formoterol alone further demonstrates the important contribution of budesonide to the combination product. The magnitude of the reduction in exacerbation rates reported in this study is similar to that reported previously for fixed-dose regimens of budesonide/formoterol 320/9 μg twice daily[2–4] and fluticasone propionate/salmeterol[11–13] relative to LABA alone. A key difference between this 12-month study and the 6-month study of similar design reported by Tashkin et al.[4] is that this study was powered to show a difference in exacerbations, while the 6-month study was not. This study demonstrated a significant reduction in exacerbation rates in patients treated with budesonide/formoterol 320/9 μg and 160/9 μg compared with those receiving formoterol or placebo. The fact that a greater percentage of patients in the formoterol and placebo groups who previously received ICS therapy, either alone or in combination, experienced exacerbations compared with those who did not previously receive ICS therapy may result in a skewing of the data; however, a similar response was observed in combination budesonide/formoterol patients with a greater percentage of patients who previously received ICS experiencing exacerbations compared with patients who were not previously treated with ICS therapy. Similarly, a greater percentage of patients who previously received ICS therapy discontinued in the formoterol and placebo groups than those who did not previously receive ICS therapy; however, this trend was reversed in the combination budesonide/formoterol groups. The long-term nature of this study allowed for a more thorough assessment of tolerability compared with the 6-month study.[4] No differences in pneumonia incidence were observed among treatment groups in this study or in the 6-month study, despite a shorter exposure time in the placebo group.[4] These results contrast with previous COPD studies that showed an increased incidence of pneumonia with fluticasone propionate/salmeterol treatment.[11–14] However, in this study and the 6-month study,[4] the incidence of potential lung infections other than pneumonia was slightly higher in all active treatments compared with placebo; these differences were largely driven by an increased incidence of bronchitis. The incidence of SAEs, including COPD SAEs, was slightly higher in both budesonide/formoterol groups compared with placebo. This finding may be attributed to the earlier discontinuation in the placebo group, which resulted in a shorter study drug exposure (35 and 29 fewer days vs budesonide/formoterol 320/9 μg and 160/9 μg, respectively). Although the overall incidence of atrial fibrillation, a known LABA class effect, was small (1.1%), an imbalance was noted between the formoterol-containing arms and placebo. However, no differences in new-onset atrial fibrillation were apparent among groups based on 24-hour Holter monitoring or ECG data collected at repeated visits during the randomized treatment period. 
Overall, the safety profile of budesonide/formoterol pMDI in this 12-month study was similar to that reported in the 6-month study by Tashkin et al.,[4] suggesting no increase in safety concerns with a longer duration of treatment.\n\nBODY.CONCLUSIONS:\nBoth dosage strengths of budesonide/formoterol pMDI (320/9 μg and 160/9 μg) were effective and well tolerated over 1 year of treatment in this population of patients with moderate to very severe COPD. In addition, budesonide/formoterol pMDI 320/9 μg demonstrated greater efficacy compared with formoterol for pre-dose FEV1, suggesting that the higher dosage strength containing twice the daily dose of budesonide is appropriate in this patient population.\n\nBODY.ELECTRONIC SUPPLEMENTARY MATERIAL:\nSupplementary material, approximately 247 KB.\n\n**Question:** Compared to Placebo what was the result of Budesonide/formoterol pMDI 160/9 μg on Breathlessness Cough and Sputum Scale improvement?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly increased\n"
}
] |
significantly increased
|
427
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Six-week high-intensity exercise program for middle-aged patients with knee osteoarthritis: a randomized controlled trial [ISRCTN20244858]\n\n ABSTRACT.BACKGROUND:\nStudies on exercise in knee osteoarthritis (OA) have focused on elderly subjects. Subjects in this study were middle-aged with symptomatic and definite radiographic knee osteoarthritis. The aim was to test the effects of a short-term, high-intensity exercise program on self-reported pain, function and quality of life.\n\nABSTRACT.METHODS:\nPatients aged 36–65, with OA grade III (Kellgren & Lawrence) were recruited. They had been referred for radiographic examination due to knee pain and had no history of major knee injury. They were randomized to a twice weekly supervised one hour exercise intervention for six weeks, or to a non-intervention control group. Exercise was performed at ≥ 60% of maximum heart rate (HR max). The primary outcome measure was the Knee injury and Osteoarthritis Outcome Score (KOOS). Follow-up occurred at 6 weeks and 6 months.\n\nABSTRACT.RESULTS:\nSixty-one subjects (mean age 56 (SD 6), 51 % women, mean BMI 29.5 (SD 4.8)) were randomly assigned to intervention (n = 30) or control group (n = 31). No significant differences in the KOOS subscales assessing pain, other symptoms, or function in daily life or in sport and recreation were seen at any time point between exercisers and controls. In the exercise group, an improvement was seen at 6 weeks in the KOOS subscale quality of life compared to the control group (mean change 4.0 vs. -0.7, p = 0.05). The difference between groups was still persistent at 6 months (p = 0.02).\n\nABSTRACT.CONCLUSION:\nA six-week high-intensive exercise program had no effect on pain or function in middle-aged patients with moderate to severe radiographic knee OA. Some effect was seen on quality of life in the exercise group compared to the control group.\n\nBODY.BACKGROUND:\nExercise is considered to be one of the most important treatments for patients with mild to moderate knee osteoarthritis [1,2]. Positive effects on pain and function, as well as cost-effectiveness have been reported [3,4]. The effect size obtained on pain experience is similar to that of pharmacological treatment [3,5]. The side effects have also been reported to be favorable, including reduced risk of inactivity-related disorders, such as cardiovascular disease and diabetes [2,6,7]. Moderate levels of physical activity are not associated with radiographic progression, but activities involving a risk of severe knee injury are closely related to increased risk of developing radiographic knee osteoarthritis [8-12]. The dose-response relationship of exercise on symptoms and function is not clear and exercise recommendations in osteoarthritis guidelines are based mostly on studies on elderly people, i.e. mean age ≥ 65 [3,13-15]. 
It is not clear whether exercise has a similar effect on pain and function in middle-aged patients compared with elderly patients. The aim of this study was to examine the effects of a short-term, high-intensity exercise program in middle-aged subjects (age 36–65) with definite radiographic knee osteoarthritis on self-reported pain, function, and quality of life.\n\nBODY.METHODS.SUBJECTS:\nA flow chart of the recruitment process is given in Figure 1. Radiologists and orthopedic surgeons at the Halmstad County Hospital, in the south-west of Sweden, and general practitioners within the catchments area of this hospital, were informed about the study and asked to list patients with radiographic knee osteoarthritis on a \"patients eligible for research\" list. Between October 1998 and October 2001 121 patients, referred by their general practitioner for radiographic examination because of long standing knee pain, were listed. Ninety-seven fulfilled the inclusion criteria: age 35–65, living in the defined geographic area, and diagnosis of radiographic osteoarthritis of Kellgren and Lawrence grade III or more, i.e. definite osteophytes and joint space narrowing. All listed patients received written information about the study. One week after the information was sent, patients were contacted by telephone, and invited to participate in the study. Twenty-eight patients declined participation for various reasons, the most common reason being lack of time and interest. To ensure only patients with symptoms due to knee osteoarthritis and eligible for exercise intervention were included, the following exclusion criteria were used: inflammatory joint disease, anterior cruciate ligament injury, known symptomatic injury to the menisci, hip symptoms more aggravating than the knee symptoms, about to have knee replacement surgery within 6 months, and co-morbidities not allowing exercise (Figure 1). Figure 1Flowchart of recruitment process and included patients. When eight or more patients fulfilled the inclusion criteria they were invited to baseline interview and examination for determination of exclusion criteria. Randomization was performed after the baseline examination. All patients were informed that they could be randomly allocated to either the exercise or the control group. After written informed consent, sixty-five subjects were randomized. Randomization was performed by the patient drawing a sealed envelope containing a piece of folded paper with either the word \"exercise\" or \"control\" written on it. Four persons were falsely randomized (one was too old at inclusion, one had severe hip osteoarthritis, one had fibromyalgia, and one had only joint space narrowing and no significant osteophytes), and thus 61 subjects entered the study. Thirty persons were allocated to the exercise group and 31 to the control group. Patients in the control group were offered exercise classes after the six-month follow-up period.\n\nBODY.METHODS.SUBJECTS.EXERCISE GROUP:\nThe number of participants exercising together varied from two to nine. There were eight intervention groups in all. One-hour exercise sessions, twice a week for six weeks, were supervised by a physical therapist (CT). The program consisted of weight-bearing exercises aimed at increasing postural control and endurance and strength in the lower extremity (see additional file 1 for the complete exercise program). Exercises were performed at five stations at submaximal intensity (minimum 60% of maximum heart rate (HRmax)). 
Intensity was gradually and individually increased during the six weeks by increased lever arms or range of motion. Patients were encouraged to exercise at their most vigorous intensity possible, without losing quality in performance or severely exacerbating pain. Pain during exercise was not considered as an obstacle if the patient perceived it as \"acceptable\" and no increased symptoms were persistent after 24 hours [16]. If pain exceeded this level, exercise intensity was reduced occasionally, until the \"acceptable\" level was found. On every occasion, the heart rate of two random participants was estimated at each station using Polar pulsimeters (Polar® Accurex Plus, Polar, Sweden). The other patients had their heart rate measured by the physical therapist or themselves, palpating their carotid arteries. Notes were taken by the supervising physical therapist, on every occasion and on all patients, about exercise intensity, heart rate, and perceived exertion according to Borg's Rate of Perceived Exertion scale (RPE) at each station [17]. These data were used to give the physical therapist a view of exercise intensity and to assure the preservation or increase of intensity from time to time. Patients were encouraged to keep up and increase intensity whenever possible throughout the six weeks. Patients received a thera band to perform daily pulley exercises at home. In addition, three exercises, which were considered as the most challenging to the individual, were chosen as daily home exercises. Patients were recommended to perform some kind of weight bearing submaximal activity, such as walking or their home exercises, for at least 30 minutes or two times 15 minutes every day.\n\nBODY.METHODS.SUBJECTS.CONTROL GROUP:\nThe controls were told not to make any lifestyle changes. They met the physical therapist (CT) for one hour at three times; baseline, follow up at 6 weeks and 6 months. After six months they were offered exercise classes or instructions and a home-exercise program.\n\nBODY.METHODS.OUTCOME MEASURES.PRIMARY OUTCOME:\nThe primary outcome measure was the disease-specific Knee injury and Osteoarthritis Outcome Score (KOOS) [18,19]. The KOOS assesses the patients' self-report of pain, other symptoms, activities of daily living, sport and recreation function, and knee-related quality of life, in 42 questions which take about 10 minutes to complete. The KOOS is scored from 0 to 100, separately for each subscale, 0 indicating extreme problems and 100 indicating no problems. A change of 10 points or more is considered a clinically significant change [20]. The questionnaire and scoring manual can be found at . The Western Ontario and McMaster Osteoarthritis Index (WOMAC) [21] is included in the KOOS, and WOMAC scores can also be calculated.\n\nBODY.METHODS.OUTCOME MEASURES.SECONDARY OUTCOME:\nSecondary outcome measures were the Short Form-36 item (SF-36), ergometer test, and five tests of functional performance. The SF-36 is a generic, widely used measure of general health status, which comprises eight subscales: Physical Functioning (PF), Role-Physical (RP), Bodily Pain (BP), General Health (GH), Vitality (VT), Social Functioning (SF), Role-Emotional (RE) and Mental Health (MH) [22]. The SF-36 is self-explanatory and takes about 10 minutes to complete. The SF-36 is scored from 0 to 100, 0 indicating extreme problems and 100 indicating no problems. 
The subscales assessing mainly physical components (PF, RP, BP, GH) were summarized to a physical component summary score (PCS), and the mental subscales (VT, SF, RE, MH) to a mental component summary score (MCS) [23]. Values are norm-based scored, using the U.S. general population norms from 1998. Each scale has the mean of 50 and standard deviation of 10. Scale score below 50 indicates a health status below average, and a score above 50 indicates a health status above average. Questionnaires were distributed prior to randomization at baseline, after 6 weeks, and 6 months. A bicycle ergometer test and five tests of functional performance were assessed (Figure 2). Figure 2Tests of functional performance. A) Åstrand's cycle-ergometer test [24]. B) Rising on one leg from sitting on lowest possible height [25,26]. C) One-leg hop [25], [27]. D) Lateral step-up [28]. E) One-leg semi squatting; maximum number during 30 sec. [26]. F) Heel-rising on one leg; maximum number during 20 sec. [26], [29]. 1. Åstrand's bicycle-ergometer test [24] (Fig 2A). 2. Rising on one leg, from sitting on lowest possible height [25,26] (Fig 2B). 3. One-leg hop [25,27] (Fig 2C). 4. Lateral step-up [28] (Fig 2D). 5. One-leg semi squatting; maximum number during 30 seconds [26] (Fig 2E). 6. Heel-raising on one leg; maximum number during 20 seconds [26,29] (Fig 2F). Tests of functional performance were recorded on three occasions: prior to randomization at baseline, after 6 weeks, and at 6 months. To assess compliance the number of exercise occasions attended was noted.\n\nBODY.METHODS.STATISTICS:\nPost-hoc, a power analysis was performed to estimate the number of patients needed to show a clinically significant difference between groups. Estimating the least clinical significant difference to be 11 ± 15 KOOS points, a total of 30 subjects in each group were needed to detect a difference with 80% power, p = 0.05. Data were analyzed using nonparametric tests. P-values of less than or equal to 0.05 were considered to be significant, and all tests were two-tailed. To compare groups, Mann-Whitney U-test was used. Friedman's test was used for repeated measures analysis of variance. Six weeks was considered as the time-point of primary interest, and 6 months as follow-up. Wilcoxon signed rank test was performed to compare changes from baseline to six weeks and 6 months respectively. Analyses were performed using SPSS 12.0.1 for Windows [30]. The study was approved by the Research Ethics Committee at Lund University, Sweden (LU 99–98), and is in compliance with the Helsinki Declaration.\n\nBODY.RESULTS.SUBJECTS:\nThe mean age of the 61 included subjects was 56 ± 6 years, and the mean BMI was 29.5 ± 4.8 kg/m2. Patient characteristics are shown in table 1. Twenty-eight patients in each group were available for follow-up. The reasons for dropout were lack of time, reorganization at work, sudden illness, and increased knee symptoms (Figure 1). There were no clinically significant differences in baseline characteristics between the groups. Patient characteristics are shown in table 1. 
Table 1 Patient characteristics at baseline Exercise group n = 30 Control group n = 31 p-value Age (years) mean ± SD (range) 54.8 ± 7.1 (36–64) 57.3 ± 4.7 (46–65) 0.16 Gender number (%) women 15 (50%) 16 (52%) 0.90 BMI (kg/m) 2 mean ± SD 29.6 ± 4.5 29.5 ± 5.1 0.78 Aerobic capacity (ml O 2 /kg x min) mean ± SD 25.9 ± 6.4 25.2 ± 4.9 0.66 KOOS* pain 60 ± 18 64 ± 19 0.38 KOOS* symptoms 63 ± 20 66 ± 18 0.67 KOOS* ADL 69 ± 18 71 ± 21 0.76 KOOS* sport & recreation 34 ± 31 37 ± 29 0.54 KOOS* QOL 40 ± 15 46 ± 21 0.31 Knee Injury and Osteoarthritis Outcome Score [ 18 , 19 ] Score from 0–100, worst to best. \n\nBODY.RESULTS.COMPLIANCE:\nThe total number of performed supervised exercise sessions by the 28 patients available for follow-up in the intervention group was 302/336 (89.9%). Patients participated on average in 11 out of 12 possible exercise classes (12 classes (n = 11), 11 (n = 9), 10 (n = 6), 9 (n = 1), 2 (n = 1)). The most common reason for absence was illness not related to knee osteoarthritis, and work-related lack of time.\n\nBODY.RESULTS.BETWEEN-GROUP DIFFERENCES:\nThere was no difference between groups in pain or self-estimated function at either 6 week or 6 month follow-up. Quality of life improved significantly in the exercise group compared to the control group at 6 weeks (4.0 vs. -0.7, p = 0.05) and the results persisted at 6 months (5.1 vs. -2.3, p = 0.02, Table 2) Table 2 Comparisons of change in Knee injury and Osteoarthritis Outcome Score (KOOS) ‡ [ 18 , 19 ] subscales between exercise and control group KOOS Subscales Exercise group Control group Change in KOOS score* 95 % CI Change in KOOS score* 95 % CI p † Pain 6 w 1.8 -3.2 – 6.8 -0.3 -6.2 – 5.7 0.49 6 m 3.1 -1.9 – 8.2 -1.1 -6.6 – 4.4 0.32 Symptom 6 w 0.2 -5.1 – 5.6 -3.8 -7.7 – 0.0 0.07 6 m 1.0 -3.8 – 5.8 -3.4 -8.8 – 1.9 0.31 ADL 6 w 2.0 -2.3 – 6.3 -0.6 -7.0 – 5.8 0.96 6 m 0.9 -3.8 – 5.6 -1.9 -7.7 – 3.9 0.61 SportRec 6 w 1.2 -7.9 – 10.4 -4.4 -12.6 – 3.7 0.22 6 m 0.5 -10.1 – 11.2 -8.3 -19.5 – 2.8 0.32 QOL 6 w 4.0 -0.4 – 8.5 -0.7 -5.6 – 4.3 0.05 6 m 5.1 -0.7 – 11.0 -2.3 -9.5 – 4.9 0.02 * negative = worsening, positive = improvement † p-value for between group differences in change over time ‡ [ 18 , 19 ] The individual differences ranged from clinically significant improvement of at least 10 points to clinically significant deterioration in all KOOS subscales and in both the exercise and control group (Figure 3). Figure 3Change in KOOS pain score at six weeks. The individual change in KOOS pain at 6 weeks compared to baseline ranged from improvement to worsening, in both exercise and control group. A change of 10 points or more is considered clinically significant [20]. A similar pattern was seen for all KOOS and SF-36 subscales.\n\nBODY.RESULTS.BETWEEN-GROUP DIFFERENCES.SECONDARY OUTCOMES:\nA significant improvement was found in the exercise group compared to the control group at six weeks with regard to the SF-36 Mental Component Summary scale (MCS) (2.1 vs -1.6, p = 0.04). At six months follow-up this difference was no longer persistent (Table 3). Table 3 Comparisons of change in SF-36 Physical and Mental Components Summaries (PCS and MCS) [ 23 ] between exercise and control group. 
Short Form-36 item (SF-36) [22] Exercise group Control group mean 95%CI mean 95%CI p* Physical Component Summary (PCS) Baseline 42.5 24.4 – 57.5 43.8 24.2 – 57.3 0.49 Change at 6 weeks 3.0 -5.9 – 13.4 0.3 -15.2 – 12.6 0.13 Change at 6 months 3.0 -5.9 – 16.3 -0.7 -14.8 – 9.8 0.09 Mental Component Summary (MCS) Baseline 55.6 40.2 – 66.2 56.3 37.0 – 67.0 0.63 Change at 6 weeks 1.6 -10.6 – 15.0 -2.1 -16.9 – 11.5 0.04 Change at 6 months 0.7 -18.1 – 13.2 -0.7 -16.8 – 12.8 0.40 * Comparison between groups Improvements in functional performance of 0–20 % were seen in both groups at six weeks and six months. There was no difference in improvement between exercisers and controls (p = 0.08–0.9). See additional file 2 for the change in functional performance.\n\nBODY.DISCUSSION.MAIN MESSAGE:\nSix weeks of intensive exercise had no effect on self-reported pain or function in middle-aged patients with symptomatic and moderate-severe radiographic knee osteoarthritis.\n\nBODY.DISCUSSION.COMPARISONS WITH OTHER STUDIES:\nQuite opposite to previously published studies on exercise in knee osteoarthritis we found no improvement in pain or function. Possible reasons for this include our study group having moderate to severe osteoarthritis compared with mild to moderate in most previous studies, being younger than previously studied groups and the intervention being of comparably high intensity. It has been suggested that the responsiveness to exercise is modified by the loss of joint space width [31]. The homogeneity of this study population, with regard to radiographic changes, provided us the possibility to study the effects of exercise on patients with moderate to severe radiographic knee osteoarthritis. Can significant improvements of pain or self-reported function be expected in patients with radiographic knee osteoarthritis corresponding to Kellgren & Lawrence grade 3 or more? In this study, no improvements were seen on group level in pain or function. However, regular exercise in general is important to prevent diseases caused by inactivity [6], and thus patients with knee osteoarthritis should be encouraged to exercise. In clinical practice, patients with severe knee osteoarthritis should have treatments based on individual preferences and different stages of motivation [32]. It can be argued that the exercise intensity was too high for this group with moderate to severe knee osteoarthritis. Even though the intensity of each exercise was individually adapted, all individuals exercised at a minimum of 60% of HR max. It has been suggested that pain during exercise might be a protective mechanism in knee osteoarthritis, i.e. an increase in pain from too intensive exercises may restrain patients from further joint loading, which otherwise could cause further cartilage damage [33]. Patients in the current study were told to reduce the exercise intensity if pain during exercise was perceived as worse than 'acceptable', or persisted more than 24 hours. It is suggested that the different degrees of varus-valgus laxity should be taken into account in exercise interventions, to enhance the functional outcome [34,35]. Severe knee osteoarthritis is associated with a hip-knee-ankle malalignment and an increase in varus-valgus laxity compared to healthy knees [36]. It is possible that varus-valgus laxity mediated the effect of exercise on pain since all patients had radiographic changes corresponding to Kellgren and Lawrence grade III or more. 
Malalignment may cause increased joint loads, and greater quadriceps strength might further increase joint load by the muscles compressing the articular surfaces [37]. Younger patients are usually more physically active than elderly [38], and have higher demands on level of physical function and physical performance at work or leisure time. Thus, moderate to severe knee osteoarthritis might be perceived as more disabling by younger individuals compared to elderly. Our study population was younger (<65 years) and comprised more men (49%) than most other populations with knee osteoarthritis described in randomized controlled trials of exercise [13-15,39,40], which might have reduced the effect on self-reported function in the present study. This study showed no significant differences on self-reported pain and function either between or within groups. A post-hoc analysis was performed to study the possibility that the benefit from exercise was larger in subjects with worse pain at baseline. Fifteen patients in the exercise group were compared to 13 from the control group who had worse than total group median pain score (KOOS Pain 58 on a 0–100, worst to best scale) at baseline. The groups had comparable patient characteristics. The changes seen in these subgroups were however not different from the changes seen in the total groups. A possible limitation could be lack of power. A post-hoc analysis was performed to estimate the number of patients needed to show a clinically significant difference of 11 ± 15 KOOS-points [20]. The standard deviation of 15 is supported by results from randomized controlled trials of glucosamine supplementation [41] and a nutritional supplement [42] for knee osteoarthritis, where significant group differences were found in KOOS pain and ADL subscales. The number of subjects in each treatment arm in these RCT:s ranged from 15 to 27. The standard deviations in KOOS subscales have not previously been determined in exercise interventions. Only one of the five KOOS subscales showed a statistical significant improvement, and it can not be excluded that this result could be due to chance. The improvement of the KOOS subscale Quality Of Life in the exercise group was of doubtful clinical significance, however the improvement persisted over time, and is in accordance with previous findings of impact from exercise on mental health aspects in patients with knee osteoarthritis [31,43,44]. Group dynamics, support, or attention received may possibly have influenced the quality of life more than the exercise itself in the present study. Psychosocial factors are important determinants of physical function [45], and our results suggest that supervised exercises and follow-up are important, and that quality of life should be evaluated in osteoarthritis interventions.\n\nBODY.CONCLUSION:\nA six-week high-intensive exercise program had no effect on pain or function in middle-aged patients with moderate to severe radiographic knee OA. Some effect was seen on quality of life in the exercise group compared to the control group.\n\nBODY.COMPETING INTERESTS:\nThe author(s) declare that they have no competing interests.\n\nBODY.AUTHORS' CONTRIBUTIONS:\nCT participated in design, exercise intervention, assessments and follow-ups, statistical analyses and writing. ER participated in design, analysis and interpretation of the data, and critically revised the article. IP participated in design and interpretation of the data and critically revised the article. 
CE initiated and obtained necessary permissions for the study, arranged the initial funding, and participated in the analyses, interpretation and revision of the manuscript. All authors read and approved the final manuscript.\n\nBODY.PRE-PUBLICATION HISTORY:\nThe pre-publication history for this paper can be accessed here: \n\nBODY.SUPPLEMENTARY MATERIAL:\nAdditional File 1: Intensive exercise program Carina. Additional File 2: Tests of functional performance Carina.\n\n**Question:** Compared to No exercise what was the result of Exercise on Functional performance improvement?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** no significant difference\n"
}
] |
no significant difference
|
391
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Incisional hernia after upper abdominal surgery: a randomised controlled trial of midline versus transverse incision\n\n ABSTRACT.OBJECTIVES:\nTo determine whether a transverse incision is an alternative to a midline incision in terms of incisional hernia incidence, surgical site infection, postoperative pain, hospital stay and cosmetics in cholecystectomy.\n\nABSTRACT.SUMMARY BACKGROUND DATA:\nIncisional hernias after midline incision are commonly underestimated but probably complicate between 2 and 20% of all abdominal wall closures. The midline incision is the preferred incision for surgery of the upper abdomen despite evidence that alternatives, such as the lateral paramedian and transverse incision, exist and might reduce the rate of incisional hernia. A RCT was preformed in the pre-laparoscopic cholecystectomy era the data of which were never published.\n\nABSTRACT.METHODS:\nOne hundred and fifty female patients were randomly allocated to cholecystectomy through midline or transverse incision. Early complications, the duration to discharge and the in-hospital use of analgesics was noted. Patients returned to the surgical outpatient clinic for evaluation of the cosmetic results of the scar and to evaluate possible complications such as fistula, wound dehiscence and incisional hernia after a minimum of 12 months follow-up.\n\nABSTRACT.RESULTS:\nTwo percent (1/60) of patients that had undergone the procedure through a transverse incision presented with an incisional hernia as opposed to 14% (9/63) of patients from the midline incision group (P = 0.017). Transverse incisions were found to be significantly shorter than midline incisions and associated with more pleasing appearance. More patients having undergone a midline incision, reported pain on day one, two and three postoperatively than patients from the transverse group. The use of analgesics did not differ between the two groups.\n\nABSTRACT.CONCLUSIONS:\nIn light of our results a transverse incision should, if possible, be considered as the preferred incision in acute and elective surgery of the upper abdomen when laparoscopic surgery is not an option.\n\nBODY.INTRODUCTION:\nThe rate of incisional hernia after midline incision is commonly underestimated but probably lies between 2 and 20% [1–5]. Thus, incisional hernia is a major postoperative problem. The treatment of incisional hernia is complicated by high rates of recurrences. Recently, in a randomised controlled trial published by Burger et al. [6], midline incisional hernia repair has been shown to be associated with a 10-year cumulative recurrence rate of 63 and 32% for suture and mesh repair, respectively. The midline incision is the preferred incision for surgery of the upper abdomen, despite evidence that alternatives, such as the lateral paramedian and transverse incision, exist and might reduce the rate of incisional hernia [7]. 
Various approaches to opening the abdomen have been advocated over time. The choice for a certain incision is dependent on the exposure necessary for the desired procedure to succeed. A midline incision, be it supraumbilical, infraumbilical or both, is an approach especially suited for emergency and exploratory surgery because of the quick and generous exposure that can be achieved within a few minutes [8, 9]. The avascular nature of the linea alba minimises blood loss during this procedure. A supraumbilical transverse incision may be utilised in case exposure of the upper abdomen is desired. During this incision, the damage inflicted to the segmental arteries and nerves is previously described as being minimal [10]. Previously, only one randomised controlled trial, comparing transverse and true midline incisions, has been published specifically addressing incisional hernia incidence [11]. To determine whether the use of a transverse incision is an alternative to a midline incision for open cholecystectomy in terms of incisional hernia incidence, surgical site infection, postoperative pain and hospital stay, this randomised controlled trial was performed. This trial was conducted in an era when laparoscopic cholecystectomy was not yet available. The possibility of low incisional hernia rates after transverse incisions and the fact that little is known about potential advantages incited us to publish the relevant results of this randomised controlled trial which has been performed in the past and has only been reported in a Dutch thesis by one of the authors (H.L.). The primary endpoint of this study was the incisional hernia incidence after 12 months of follow-up. Secondary endpoints included pain and cosmetic appearance.\n\nBODY.METHODS.PROTOCOL:\nSome 150 consecutive female patients were randomly assigned to a midline or transverse incision as an approach for elective cholecystectomy or combined cholecystectomy and cholangiography (with or without consecutive choledochotomy) (75 and 75 patients, respectively). Emergency procedures were excluded from participation. The sample size is based on an incisional hernia rate reduction from 20 to 6% at a power of 80% and an error rate of 5%. Obtaining informed consent was conducted in accordance with the ethical standards of the Helsinki Declaration of 1975. The investigation reported was performed with informed consent from all of the patients and followed the guidelines for experimental investigation with human subjects and was approved by the medical ethics committee. An independent statistician prepared closed, tamper-proof envelopes containing the random allocation (Fig. 1). Patients were randomised for one of the procedures in theatre through the opening of the envelopes.Fig. 1Flow chart of patient inclusion and follow-up Patient-related factors that were recorded were age, body mass and length and date of operation. Operation-related factors that were recorded were the exact nature of the operation, length of the incision, the thickness of the subcutaneous fat, surgeon performing the procedure, as well as the duration of the operation (skin-to-skin time). In the immediate postoperative period, the use, dose and type of analgesics was recorded and a pain score was administered. The use of analgesics (morphine 7.5 mg intra-muscular injection, 4 h minimum interval between consecutive injections) was monitored for 48 h after surgery; the pain score was administered for the first 6 days after surgery. 
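For readers who want to see what the sealed-envelope allocation described above amounts to in practice, a minimal Python sketch follows. It is illustrative only: the trial's envelopes were prepared by an independent statistician, and the function name, seed and 1:1 balancing shown here are assumptions rather than details taken from the protocol.

```python
import random

def make_allocation_list(n_patients=150, arms=("midline", "transverse"), seed=2024):
    """Generate a balanced 1:1 allocation list, one entry per sealed envelope (illustrative only)."""
    allocations = list(arms) * (n_patients // len(arms))  # 75 assignments per arm
    rng = random.Random(seed)
    rng.shuffle(allocations)  # random order in which the envelopes are filled
    return allocations

envelopes = make_allocation_list()
print(envelopes[:5], envelopes.count("midline"), envelopes.count("transverse"))
```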
In patients assigned to surgery through a midline incision, the skin was incised from just below the xyphoid process to just above the umbilicus. The abdominal wall was opened in the midline by incising the linea alba. A Collin type (two-bladed) self-retaining retractor was used to maintain exposure. The abdominal wall was closed in one layer using single polygalactin 910 sutures (Vicryl; Ethicon, Amersfoort, The Netherlands). The skin was consequently closed using running monofilament nylon sutures (Ethilon; Ethicon, Amersfoort, The Netherlands). Patients randomised for a transverse incision received a right-sided unilateral transverse incision between 3 and 4 cm below the costal margin. The rectus muscle was incised. The fibres of the external and internal obliques and the transverse muscles were separated in the direction of their course. Exposure was achieved through the use of a manually held single-bladed retractor. Closure of the abdominal wall was achieved by closure of the peritoneum and the posterior rectus fascia using a continuous, polygalactin 910 suture (Vicryl; Ethicon, Amersfoort, The Netherlands). The anterior rectus sheath and the fascia of the internal and external transverses were closed using simple interrupted polygalactin 910 sutures (Vicryl; Ethicon, Amersfoort, The Netherlands). Towards the end of both procedures, a Redon low-vacuum drain catheter was placed, which was guided outside the abdominal cavity approximately 5 cm from the incision. The skin was consequently closed using continuous monofilament nylon suture (Ethilon; Ethicon, Amersfoort, The Netherlands). All patients received a dose of 5,000 IU of sodium–heparin on the morning of the procedure as thrombosis prophylaxis.\n\nBODY.METHODS.STATISTICAL ANALYSIS:\nThe Pearson χ2 test was used for comparing percentages. In case of small expected numbers, a Fisher's exact test was performed. Continuous variables were analysed using the Mann–Whitney test. A P-value of 0.05 or less (two-sided) was considered to be statistically significant. Means and medians are expressed ±standard deviation (SD).\n\nBODY.METHODS.FOLLOW-UP:\nPatients returned to the surgical outpatient clinic for evaluation of the cosmetic results of the scar and to evaluate possible complications, such as fistula, wound dehiscence and incisional hernia, after a minimum of 12 months follow-up. The patient and the surgeon evaluated the cosmetic results independently and were asked to rate the scar as unsatisfactory, satisfactory or fine. Furthermore, the length and width of the scar was measured.\n\nBODY.RESULTS.STUDY GROUP:\nSome 150 consecutive patients were randomised for participation in this study during an inclusion period from April 1977 until July 1979. Seventy-five patients received a transverse incision and 75 patients a midline incision (Fig. 1). One patient was withdrawn from further follow-up after developing peritonitis and consequent acute respiratory distress syndrome (ARDS) not related to the closure of the abdominal wall 2 days after surgery (transverse incision group). The patients' average age was 51.9 and 51.4 years for the midline and the transverse incision groups, respectively. Furthermore, no differences were found in the body mass and average length between the two groups (Table 1). A cholecystectomy was performed using a transverse incision in 52 patients and utilising a midline incision in 52 patients also. Fifteen and 16 patients, respectively, underwent a combined cholangiography/cholecystectomy. 
A further 7 and 6 patients, respectively, were treated with a cholangiography/cholecystectomy plus additional choledochotomy and the postexploratory placement of a T-tube.\n\nTable 1: Baseline characteristics of the patients undergoing surgery, according to study group\n\n| Variable | Midline incision (n = 75) | Transverse incision (n = 74) |\n| --- | --- | --- |\n| Average age (years) ± SD | 51.9 ± 14.8 | 51.4 ± 13.8 |\n| Average weight (kg) ± SD | 71.3 ± 14.5 | 68 ± 14.3 |\n| Average length (cm) ± SD | 163.5 ± 7.8 | 164 ± 7.3 |\n\nBODY.RESULTS.SURGEON:\nStaff surgeons performed 17% (13/75 patients) of all procedures performed through a midline incision. The remainder of the procedures through a midline incision was carried out under staff surgeon supervision. Staff surgeons performed 14% of all procedures in the transverse incisions study group (10/74 patients) and supervised the remainder. No statistically significant difference was found between the two randomised groups (P = 0.65).\n\nBODY.RESULTS.DURATION OF SURGERY:\nNo significant difference was noted in the skin-to-skin time (in min) for the two different incisions (Table 2). Surgery utilising midline and transverse incision took 56.9 ± 29.3 and 53.2 ± 26.8 min, respectively (P = 0.35). The total duration of the procedures until extubation (in min) did not differ between the midline and transverse incisions (71.0 ± 30.5 and 67.0 ± 27.3, respectively, P = 0.34).\n\nTable 2: Length of incision, thickness of subcutaneous fat and skin-to-skin time, according to study group\n\n| Variable | Midline incision | Transverse incision | P-value |\n| --- | --- | --- | --- |\n| Length of incision (mm) ± SD(a) | 164 ± 28 | 140 ± 24 | <0.0001 |\n| Thickness of subcutaneous fat (mm) ± SD(a) | 34.5 ± 13.0 | 30.3 ± 12.4 | 0.05 |\n| Skin-to-skin time (min) ± SD(a) | 56.9 ± 29.3 | 53.2 ± 26.8 | 0.40 |\n| Width of scar (mm) ± SD(b) | 8.3 ± 1.4 | 3.3 ± 1.2 | <0.0001 |\n\n(a) Measured during surgery in 75 midline and 74 transverse incisions. (b) Measured at follow-up in 63 and 60 midline and transverse incisions, respectively.\n\nBODY.RESULTS.PAIN AND ANALGESICS:\nSignificantly more patients, having undergone a midline incision, reported pain on day one, two and three postoperatively (P < 0.0001, Table 3). In the midline incision group, 28/75 patients required no or only one dose of analgesics; the remainder required two or more doses. Thirty-one patients operated through a transverse incision required no analgesics or only one dose; 43 patients (the remainder) required two or more. No significant difference in the use of analgesics was found between the groups (P = 0.69).\n\nTable 3: Postoperatively reported pain, according to study group, shown as the number of patients reporting pain at the time points indicated (percentage), with the remainder of patients reporting no pain\n\n| Time point after surgery | Midline incision, n = 75: patients reporting pain, n (%) | Transverse incision, n = 74: patients reporting pain, n (%) | P-value |\n| --- | --- | --- | --- |\n| 3–4 h | 68 (91) | 60 (81) | 0.09 |\n| First day | 64 (85) | 39 (53) | <0.0001 |\n| Second day | 57 (76) | 23 (31) | <0.0001 |\n| Third day | 28 (37) | 9 (12) | <0.0001 |\n| Fourth day | 5 (7) | 3 (4) | 0.72 |\n| Fifth day | 0 (0) | 1 (1) | 0.50 |\n| Sixth day | 0 (0) | 1 (1) | 0.50 |\n\nBODY.RESULTS.COMPLICATIONS:\nPostoperative complications (Table 4) were seen in 16 out of 75 patients (21%) from the midline incision group and in 15% from the transverse incision group (11 patients) (P = 0.30). Briefly, one patient in each group developed cardiac complications; 8 and 6 patients developed urinary retention after the midline and transverse incisions, respectively (P = 0.59). 
Surgical site infections were diagnosed in 7 and 3 patients, respectively (P = 0.33).\n\nTable 4: Rate of complications after surgery, according to study group, shown as the number of patients diagnosed with complications (percentage)\n\n| Complication | Midline incision, n = 75: n (%) | Transverse incision, n = 75: n (%) | P-value |\n| --- | --- | --- | --- |\n| Cardiac | 1 (1) | 1 (1) | 1 |\n| Urinary retention | 8 (12) | 6 (8) | 0.59 |\n| ARDS | 0 | 1 (1) | 0.50 |\n| Surgical site infection | 7 (9) | 3 (4) | 0.33 |\n| Haemorrhage | 1 (1) | 0 | 0.50 |\n| Pneumonia | 0 | 1 (1) | 0.50 |\n| Total | 17 (23) | 12 (16) | 0.30 |\n\nBODY.RESULTS.DISCHARGE:\nForty-five (60%) and 42 (57%) patients from the patients having undergone a midline or a transverse incision, respectively, were discharged on day 6 or 7 postoperatively. The remaining patients from each group left hospital care on day 8 or later. The duration of hospital admission did not differ between the two types of incision (P = 0.74).\n\nBODY.RESULTS.COSMETICS:\nThe width and length of all incisions was measured during the follow-up visit (Table 2). The mean width of the scar after the healing of the midline incisions was found to be 8.3 ± 1.4 mm. The mean width of the scar after the healing of the transverse incisions was measured to be 3.3 ± 1.2 mm. This observed difference is significant (P < 0.0001). The length of the incisions was 140 ± 24 mm and 164 ± 28 mm for the transverse and the midline incisions, respectively. The difference in scar length was found to be significant (P < 0.0001).\n\nBODY.FOLLOW-UP:\nEighty-one percent of all patients operated through a transverse incision were seen during the follow-up examination (n = 60). Of the patients operated through a midline incision, 63 out of 75 were seen at the outpatient clinic (84%). The patients that were lost to follow-up could either not be traced or had deceased (Fig. 1). The minimum follow-up for the evaluation of cosmetic results and hernia incidence was 12 months and the maximum was 36 months.\n\nBODY.FOLLOW-UP.INCISIONAL HERNIA:\nFrom the patients that had undergone the procedure through a transverse incision, one (1/60; 2%) presented with an incisional hernia as opposed to 9 patients from the midline incision group (9/63; 14%); 95% confidence interval (CI) 7.5–25.4%. This difference in hernia incidence is significant (P = 0.017). No significant correlation was found between the incisional hernia rate and surgical site infection (P = 0.07).\n\nBODY.FOLLOW-UP.SUBJECTIVE COSMETICS:\nPatients and surgeons alike were asked to rate the appearance of the scar during the postoperative follow-up outpatient clinic visit. Both the surgeons and the patients found the scar resulting from the transverse incision to be more cosmetically pleasing (P < 0.0001 and P = 0.03, respectively, Table 5).\n\nTable 5: Number of patients and surgeons rating the cosmetics of a scar at follow-up\n\n| Score | Midline incision (n = 63): patients, n (%) | Midline incision (n = 63): surgeons, n (%) | Transverse incision (n = 60): patients, n (%) | Transverse incision (n = 60): surgeons, n (%) |\n| --- | --- | --- | --- | --- |\n| Unsatisfactory | 6 (10) | 25 (40) | 2 (3) | 6 (10) |\n| Satisfactory | 16 (25) | 27 (43) | 9 (15) | 12 (20) |\n| Fine | 41 (65) | 11 (17) | 49 (82) | 42 (70) |\n| Total | 63 | 63 | 60 | 60 |\n\nDifference between type of incision: patients P = 0.03; surgeons P < 0.0001\n\nBODY.DISCUSSION:\nThis prospective randomised study of transverse and midline incisions for open cholecystectomy shows that a significant reduction of incisional hernia incidence can be achieved through the use of a transverse incision. Only one other study (published in 1980) reported the incidence of incisional hernia after upper abdominal midline and unilateral transverse incision in a randomised trial. 
No difference between the two techniques (8 and 6% incisional hernia, respectively) was found, but the relatively short follow-up of 6 months, however, may be held accountable for this finding [11]. Three retrospective studies showed rates of incisional hernia of 3.2, 5.4 and 16.5% for midline incision and 1.3, 6.7 and 13.4% for transverse incision without statistically significant differences [12–14]. The possible reason for the rather high incidence of incisional hernia in the midline incision group (14%) may lie in the use of resorbable 910 polygalactin sutures. Nevertheless, the use of the same type of resorbable suture in the closure of the transverse incisions resulted in a 2% hernia rate. There is evidence for the importance of proper technique and choice of incision as a means to reduce incisional hernia being more important than the use of suture material [7]. Furthermore, as mentioned above, it is known that the incidence of incisional hernia in the case of a midline incision lies between 2 and 20%. From our data, the NNT (numbers needed to treat) is calculated to be 8 (95% CI 5–30) and the RRR (relative risk reduction) is 88% (95% CI 23–100%). Luijendijk et al. [15] have published a hernia rate of 2% after Pfannenstiel incisions closed using 910 polygalactin, which is in agreement with our findings in the patients randomised for a transverse incision, emphasising the importance of the incision over the choice of suture material. In our study, significantly fewer patients reported pain on day 1, 2 and 3 after transverse incisions, a result that was also described by other authors [16, 17]. Greenall et al. [18] published a contradictory report (in 1980) in which no significant difference in postoperative pain was found between midline and transverse incisions. The previously mentioned study, however, only analysed 46 out of 572 patients (8%) with regard to pain, which may explain the finding. In the same way, Lacy et al. suspended visual analogue pain scoring in a study comparing midline and transverse incisions for abdominal aortic surgery. Remarkably, the two groups in our study did not differ in terms of postoperative analgesia, a finding that is also reported by Lindgren et al. [17] and Donati et al. [19]. In our study, surgeons as well as patients were significantly more satisfied with the aesthetic appearance after a transverse in comparison with a midline incision. The scars after transverse incisions were found to be significantly shorter and less wide than the midline incisions, which may account for the observed difference. A possible reason for this is that a transverse incision is executed parallel to the prevailing direction of the skin lines on the abdomen and, therefore, the tension on the wound and consequent scar is low. Cholecystectomy has come a long way since this trial. The introduction and widespread acceptance of laparoscopic technique as the treatment of choice has rendered open cholecystectomy to be an operation for exceptional, and perhaps surgically difficult, circumstances. Nowadays, the study reported is hardly feasible, yet, the results are still applicable and very relevant for other surgical procedures in the (upper) abdomen. Knowledge of the favourable results of a transverse incision may aid surgeons in their choice when finding themselves in the unfortunate position of needing conversion to open cholecystectomy. 
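As a quick, illustrative check of the NNT and RRR quoted in this discussion, the figures follow directly from the hernia counts reported at follow-up (9/63 after midline versus 1/60 after transverse incision). The short Python sketch below re-derives them; it does not attempt to reproduce the confidence intervals given in the text, and the variable names are chosen here for clarity.

```python
# Re-derivation of the quoted NNT and RRR from the follow-up hernia counts.
midline_events, midline_n = 9, 63
transverse_events, transverse_n = 1, 60

cer = midline_events / midline_n        # control (midline) event rate, ~0.143
eer = transverse_events / transverse_n  # experimental (transverse) event rate, ~0.017

arr = cer - eer                         # absolute risk reduction, ~0.126
nnt = 1 / arr                           # ~7.9, reported as 8
rrr = 1 - (eer / cer)                   # ~0.88, reported as 88%

print(f"ARR = {arr:.3f}, NNT = {nnt:.1f}, RRR = {rrr:.0%}")
```

Running it gives an absolute risk reduction of roughly 12.6%, an NNT of about 8 and an RRR of about 88%, consistent with the values quoted above.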
In conclusion, this investigation on transverse incisions might be helpful in reducing the incidence of incisional hernia in patients after open cholecystectomy. The midline incision is a preferred manner to achieve exposure of the abdominal cavity and is considered to be easily performed and quick. Although the midline incision is generally accepted, the incidence of incisional hernias is surprisingly high [1–5]. The choice for a particular incision should not only be based on exposure, but also on hernia incidence reduction, especially since recurrence rates after hernia repair are reported to be very high. Furthermore, the recurrence rate after incisional hernia repair is a disappointing 63 and 32% for suture and mesh repair, respectively [6]. In the light of these results, incisional hernia prevention is warranted. In this investigation, it is shown that a significant reduction (from 14.5 to 1.7%) of incisional hernia incidence was achieved by using a transverse incision. Hence, a transverse incision should be considered as the preferred incision in acute and elective surgery of the upper abdomen in which laparoscopic surgery is not an option. Full exposure of two quadrants is feasible through the use of a unilateral transverse incision in, for example, biliary, bariatric, liver and colonic surgery. The transverse incision should be part of the abdominal surgeon's armamentarium and is a preferable incision to prevent the high incidence of incisional hernia after abdominal surgery.\n\n**Question:** Compared to Midline incision what was the result of Transverse incision on Pain on day one after surgery?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly decreased\n"
}
] |
significantly decreased
|
384
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Effects of Bladder Training and/or Tolterodine in Female Patients with Overactive Bladder Syndrome: A Prospective, Randomized Study\n\n ABSTRACT:\nWe compared the effects of bladder training and/or tolterodine as first line treatment in female patients with overactive bladder (OAB). One hundred and thirty-nine female patients with OAB were randomized to treatment with bladder training (BT), tolterodine (To, 2 mg twice daily) or both (Co) for 12 weeks. Treatment efficacy was measured by micturition diary, urgency scores and patients' subjective assessment of their bladder condition. Mean frequency and nocturia significantly decreased in all treatment groups, declining 25.9% and 56.1%, respectively, in the BT group; 30.2% and 65.4%, respectively, in the To group; and 33.5% and 66.3%, respectively in the Co group (p<0.05 for each). The decrease in frequency was significantly greater in the Co group than in the BT group (p<0.05). Mean urgency score decreased by 44.8%, 62.2% and 60.2% in the BT, To, and Co groups, respectively, and the improvement was significantly greater in the To and Co groups than in the BT group (p<0.05 for each). Although BT, To and their combination were all effective in controlling OAB symptoms, combination therapy was more effective than either method alone. Tolterodine alone may be instituted as a first-line therapy, but may be more effective when combined with bladder training.\n\nBODY.INTRODUCTION:\nOveractive bladder (OAB), defined as increased frequency and urgency with or without urge incontinence, is a very common condition, especially in middle-aged women, with a reported incidence of approximately 17% of the general female population (1, 2). Since many women may consider OAB as a natural aging phenomenon and thus do not seek treatment, its actual prevalence may be even higher. Despite its common occurrence, however, our understanding (3-5) of the pathophysiology underlying the development of OAB has remained elementary. It is managed most commonly by drug therapy, followed by various behavioral interventions and, less frequently, by surgical methods. Each of these methods, however, has variable efficacy and adverse effects. Classically, urge incontinence has been considered most important, being the key index in treatment outcome assessments. However, since the new consensus by the International Continence Society on the diagnosis of OAB (6), the presence or absence of actual urine leakage was regarded as less important. With the adaptation of this new definition, the number of patients newly diagnosed with OAB is expected to increase, emphasizing the need to develop first-line interventions for their symptoms. The three most commonly employed methods for treating newly-diagnosed OAB are bladder training, anticholinergic therapy and a combination of the two. 
Here we report a controlled, randomized study comparing the clinical efficacy of these three methods, for the purpose of determining the most appropriate first-line therapeutic modality.\n\nBODY.MATERIALS AND METHODS:\nBetween May 2001 and April 2002, 139 women presenting with symptoms of urgency and frequency, with or without urge incontinence, were prospectively enrolled in this study. Inclusion criteria included being 18 yr of age or older, having a urination frequency of 8 or more times per day, the presence of urge symptoms that may or may not accompany incontinence, symptom duration of 3 months or longer and no prior history of treatment for OAB. We excluded patients with active urinary tract infection, clinically significant stress urinary incontinence, bladder outlet obstruction, interstitial cystitis, glaucoma or megacolon. Also excluded were patients with a maximal urine flow rate of less than 10 mL/sec or a postvoid residual urine amount that was more than 30% of the total amount voided on uroflowmetry. Upon enrollment, patients were randomly assigned to bladder training (BT, n=46), tolterodine treatment (To, 2 mg twice daily, n=47) or a combination of the two (Co, n=46). Each patient received a physical examination and uroflowmetry with postvoid residual urine scan. In addition, each patient kept a record of urgency scores, with 0 indicating no symptoms, 1 rarely, 2 occasionally, 3 often, and 4 all the time, and a frequency-volume chart. At 12 weeks, satisfaction score was assessed, with 0 representing fully satisfied, 1 much improved, 2 slightly improved, 3 no improvement, and 4 worsened; a change of 2 points or more was considered symptom improvement. The frequency-volume chart and a survey of adverse effects of medication were repeated every 4 weeks. Patients in the BT and Co groups were educated of the bladder drills by a nurse specialist. Gross and functional anatomy of the lower urinary tract and pelvis were demonstrated, with instructions on how bladder drills can increase functional bladder capacity and actual voiding habits. Each patient was instructed to examine her frequency-volume chart to determine the longest interval she could initially hold urination and sustain urge symptoms. Each patient was then taught to increase this interval by 15-min increments, with the aim of achieving a 3 to 4 hr interval and a voided volume of 300 to 400 mL. In addition, whenever urgency occurred, patients were educated to perform Kegel exercises for symptom alleviation. All bladder drill education was performed by one nurse specialist, who telephoned each patient every 2 weeks to make sure she was following the exact drill taught. In the Co group, the bladder drill was started at the same time as the start of medication. Of the 139 patients, 89 (64.0%) completed the 12 week treatment regimen and were subjects of outcome analyses. Voiding frequency, nocturia, urgency score before and after the treatment, satisfaction score at 12 weeks and adverse events were measured in the three groups. Student's t-test was used for statistical analysis, and p<0.05 was defined as statistically significant.\n\nBODY.RESULTS:\nPatient characteristics, including age, weight, symptom severity and duration, parity, pretreatment uroflowmetry profile, presence and degree of urge or stress incontinence, and urgency score, were similar among the three treatment groups (Table 1). 
At 12 weeks, 26 patients (56.5%) in the BT group, 32 (68.1 %) in the To group, and 31 (67.4%) in the Co group had completed the protocol. Mean frequency in the three groups decreased significantly, from 10.9 to 8.1 (25.9%) in the BT group, from 11.6 to 8.1 (30.2%) in the To group, and from 11.9 to 7.9 (33.5%) in the Co group (p<0.05 each) (Fig. 1). Nocturia also decreased significantly in the three groups, from 1.5 to 0.6 per night (56.1%) in the BT group, from 1.7 to 0.6 (65.4%) in the To group, and from 2.0 to 0.6 (66.3%) in the Co group (p<0.05 each) (Fig. 2), as did urgency scores, from 2.6 to 1.4 (44.8%) in the BT group, from 2.8 to 1.1 (62.2%) in the To group, and from 3.0 to 1.2 (60.2%) in the Co group (p<0.05 each) (Fig. 3). When we compared these changes among the three groups, we found that patients in the Co group showed greater improvements in frequency and urgency scores than did patients in the BT group (p<0.05 each), and that urgency scores in the To and Co groups showed significantly greater improvement than in the BT group (p=0.017 and p=0.021, respectively). No difference was observed between the To and Co groups. Satisfaction scores at 12 weeks did not differ among the three groups, being 1.5 in the BT group, with improvement in 53.9%; 1.4 in the To group, with improvement in 63.0%; and 1.3 in the Co group, with improvement in 71.0%. No adverse events were reported from the BT group, whereas 13 patients (40.6%) in the To group and 12 (38.7%) in the Co group reported one or more adverse events (Table 2). Most of the events were mild and well-tolerated, except that severe dry mouth occurred in 2 patients from each group, which led them to discontinue medication (withdrawal rates, 6.3% and 6.5%, respectively).\n\nBODY.DISCUSSION:\nDespite extensive research on the complicated interactions involving the neural network of the cerebrum, sacral cord and the detrusor, little is known regarding the pathologic process that occurs during the development of OAB (4, 5, 7). Accordingly, therapeutic approaches for OAB have been diverse, consisting largely of medical therapy, but also including behavioral interventions such as bladder training, pelvic floor muscle exercises with or without biofeedback, and electrical stimulation. Other therapeutic approaches have included neuromodulation of the sacral nerves, intravesical instillations of antimuscarinics and bladder overdistention, with augmentation cystoplasty remaining as the last resort. Naturally, non-invasive methods, including medical therapy and behavioral intervention, have been the initial treatments of choice for OAB symptoms. Although it is not known how bladder drills control and sustain OAB symptoms, it is believed that ill-trained voiding habits leading to decreased bladder compliance by frequent voiding are corrected by progressively increasing the voiding interval, thus increasing the voided volume and the functional bladder capacity. Retraining these patients to void at normal intervals would restore compliance when capacity increases and may contribute to symptom improvement (8-11). Observations on the electrical stimulation of the pelvic floor muscles (12) have shown that, during electrical stimulation, hyperactivity of the bladder was either diminished or completely abolished. This mechanism has been exploited as part of bladder training regimens. Clinically, bladder training has been shown to be effective in the management of urgency and urge incontinence (8, 13-15). 
When bladder training was compared with medical therapy (200 mg flavoxate hydrochloride plus 25 mg imipramine three times daily for 4 weeks), urge incontinence was completely controlled in 84% of patients in the BT group, with symptom relief in 76%, compared with 56% and 48%, respectively, in the medication group (16). A larger study (17), however, found that addition of behavioral interventions did not seem to offer any additional benefit to medical therapy (tolterodine) after 6 months. Urgency episodes and daytime frequency decreased 83% and 27.3%, respectively, in the tolterodine group, compared with 78.7% and 23%, respectively, in the combination group. Improvements in patients' perception of urinary symptoms were reported by 85.9% of the tolterodine group and 81.7% of the combination group. We found that all patients, a homogeneous group of female patients, regardless of treatment methodology, reported significant improvements in all voiding parameters examined, but that the addition of medication to BT resulted in additional improvements in urgency and frequency. Our finding of better outcomes in the Co than in the To group may be due to the bladder training program and design used here, which was a short-term intensive course of bladder drills. As many authors have emphasized, patients' understanding and compliance are the key factors in bladder training. Accordingly, the exercise program that we used did not simply require patients to contract and relax their pelvic floor muscles for certain periods of time. Instead, patients were given specific goals of 15 min each in sustaining contractions and were instructed how to increase these intervals. Having such a specific aim of what to accomplish with each exercise must have been enforcing and rewarding as well. In addition, repeated instructions, confirmation and reassurance were essential and constantly encouraged patients to stay on the right track for 12 weeks. Similar results have been reported (18), in which patients were provided with written information sheets describing the aims and endpoints of therapy as well as specific instructions on bladder drills. Although there were no formal bladder training sessions or additional follow-up for those in the bladder training-only group, the self-explanatory information sheets the patients kept would have reinforced and encouraged patients while on bladder drills. Thus, patients on medication plus bladder training reported significantly better results after therapy than did patients on medication alone. In the previous study, in which the drills were simplified, it may have facilitated patient adherence to treatment, but without any further reinforcements, the therapeutic benefits of BT may have been compromised (17). In this study, we found that patient adherence to treatment was lowest and discontinuation rate highest in the BT group, although the treatment efficacy results were similar at the end of 12 weeks. This may be attributed to the longer duration of patient maneuvering required before experiencing significant improvement in this BT-only group. Thus, despite many efforts by the physician to make patients stay on the correct treatment regimen, it was evidently easier for patients to take medication twice a day than to repeat exercises many times a day for months. We noted no adverse effects in the BT group, whereas, in the other 2 groups, the most common adverse effect was dry mouth. 
We found that 6.3% of patients in the To group and 6.5% in the Co group had to discontinue medication due to severe dry mouth, percentages lower than those previously reported (19, 20). These results indicate that 4 mg/day tolterodine is safe and does not require dose adjustment. From these results, we conclude that bladder training, tolterodine and the combination of the two are all effective in controlling OAB symptoms, with comparable patient satisfaction. Since a short-term intensive course of bladder drill is as effective as medical or combined therapy in all voiding symptoms, but does not have any adverse effects, bladder training can be safely and effectively employed as first line treatment for OAB. In patients who have predominantly symptoms of frequency or urgency, tolterodine may be more effective, especially when combined with bladder training.\n\n**Question:** Compared to Bladder training only (BT) what was the result of Tolterodine only (To) on Urgency score improvement?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly increased\n"
}
] |
significantly increased
|
259
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Impact and Process Evaluation of Integrated Community and Clinic-Based HIV-1 Control: A Cluster-Randomised Trial in Eastern Zimbabwe\n\n ABSTRACT.BACKGROUND:\nHIV-1 control in sub-Saharan Africa requires cost-effective and sustainable programmes that promote behaviour change and reduce cofactor sexually transmitted infections (STIs) at the population and individual levels.\n\nABSTRACT.METHODS AND FINDINGS:\nWe measured the feasibility of community-based peer education, free condom distribution, income-generating projects, and clinic-based STI treatment and counselling services and evaluated their impact on the incidence of HIV-1 measured over a 3-y period in a cluster-randomised controlled trial in eastern Zimbabwe. Analysis of primary outcomes was on an intention-to-treat basis. The income-generating projects proved impossible to implement in the prevailing economic climate. Despite greater programme activity and knowledge in the intervention communities, the incidence rate ratio of HIV-1 was 1.27 (95% confidence interval [CI] 0.92–1.75) compared to the control communities. No evidence was found for reduced incidence of self-reported STI symptoms or high-risk sexual behaviour in the intervention communities. Males who attended programme meetings had lower HIV-1 incidence (incidence rate ratio 0.48, 95% CI 0.24–0.98), and fewer men who attended programme meetings reported unprotected sex with casual partners (odds ratio 0.45, 95% CI 0.28–0.75). More male STI patients in the intervention communities reported cessation of symptoms (odds ratio 2.49, 95% CI 1.21–5.12).\n\nABSTRACT.CONCLUSIONS:\nIntegrated peer education, condom distribution, and syndromic STI management did not reduce population-level HIV-1 incidence in a declining epidemic, despite reducing HIV-1 incidence in the immediate male target group. Our results highlight the need to assess the community-level impact of interventions that are effective amongst targeted population sub-groups.\n\nBODY.INTRODUCTION:\nHIV-1–prevalence declines may now be occurring in some sub-Saharan African countries [1]. However, there remains little direct evidence that prevention measures—rather than natural HIV-1 epidemic dynamics [2] or behaviour change prompted by mortality [3]—have contributed to the slowing of HIV-1 epidemics [4,5]. Syndromic management of sexually transmitted infections (STIs) proved effective early in an HIV-1 epidemic in north-west Tanzania [6]. Peer education to promote safe behaviours showed promise in early process evaluations [7], but a randomised controlled trial (RCT) of factory workers in Harare, Zimbabwe, done in the mid-1990s, proved inconclusive [8]. Subsequent RCTs of syndromic management [9] and mass treatment of STIs [10], together with an information, education, and communication (IEC) behaviour-change programme [9], showed no effect in more mature epidemics. 
Integrated implementation of synergistic community-based HIV-1 control strategies could be a more cost-effective and sustainable approach to HIV-1 prevention than parallel application of vertical (top-down) programmes [11]. One scientific evaluation of such a strategy has been reported in which a combination of IEC activities amongst the general population and syndromic STI management showed no impact on HIV-1 incidence at the population level [9], although participation in the IEC activities was associated with reduced HIV-1 infection in women [12]. We conducted a cluster-RCT to test the hypothesis that integrated implementation of combined community- and clinic-based HIV-1 prevention, in which IEC activities focus primarily on high-risk populations, can be feasible and effective in reducing HIV-1 incidence in a major maturing epidemic in eastern Zimbabwe (Protocols S1 and S2; Text S1 and S2).\n\nBODY.METHODS.PARTICIPANTS AND RANDOMISATION PROCEDURE:\nThe study communities comprised six pairs of communities matched by socio-economic type—small town, tea/coffee estate, forestry plantation, roadside trading settlement, and subsistence farming area (two pairs) (Figure 1). Each community included at least one Government or Mission health centre. It was anticipated that HIV-1 incidence would be similar within each pair of communities. Within each pair, one community was assigned at random (un-blinded coin toss by a Ministry of Health official witnessed by programme and research personnel) to receive the additional intervention and the other to be the control. These procedures were designed to ensure that Mission, non-governmental organisation, and private sector programmes (for details, please refer to the following section) would be distributed evenly between intervention and control sites. Figure 1Location of Intervention and Control Communities in Manicaland Province, Eastern Zimbabwe We assessed the effect of the intervention using results from laboratory tests for HIV-1 infection and questionnaire data collected in the baseline and 3-y follow-up rounds of a population-based, closed-cohort survey. The 12 study communities were enumerated in a phased manner, with paired communities being enumerated consecutively to minimise the effects of any seasonal factors. HIV-1–prevention activities were commenced in each intervention community shortly after completion of the baseline survey in that community. In each community, individuals eligible for the study were identified in the first round using data from household listings prepared in an initial census. All males and females aged 17–54 y and 15–44 y at last birthday (the age groups expected to have the highest incidence of HIV infection), respectively, who had slept in a household in the community for at least four nights in the previous month, and who had also done so at the same time 1 y earlier, were considered eligible for the study. In heterosexually driven HIV-1 epidemics, risk of infection can be correlated amongst marital partners [13]. 
Therefore, to maximise statistical power to detect differences in HIV-1 incidence, enrolment was restricted to one randomly selected member per marital group.\n\nBODY.METHODS.INTERVENTIONS:\nIntervention and control communities were to receive standard Government services including basic syndromic STI management, condom distribution from health clinics and Zimbabwe National Family Planning Council outlets, home-based care, and limited HIV/AIDS–focussed IEC activities (e.g., occasional AIDS-awareness meetings and distribution of posters and leaflets). In addition, social marketing of male and female condoms would be provided through an ongoing national programme [14]. The intervention comprised targeted and population-level strategies to promote safer sexual behaviour and to improve treatment of STIs that facilitate HIV-1 transmission. The intervention strategies were implemented by two local non-governmental organisations (Family AIDS Caring Trust and the Biomedical Research and Training Institute) and the Zimbabwe Ministry of Health and Child Welfare through an integrated programme of community- and clinic-based activities. Integration of the individual programme components was achieved through the joint involvement of the participating agencies in the planning and implementation of activities and through the inclusion of biomedical and behavioural aspects within each component. The programme design comprised three key components: (1) peer education and condom distribution amongst commercial sex workers and male clients at workplaces and in the general community, supported by income-generating projects; (2) strengthened syndromic management of STI services at local health centres; and (3) open days with HIV/AIDS IEC activities at health centres to promote safer sexual behaviour and to increase the uptake of local STI treatment services. The peer-education component was based on a model which had been developed by the Project Support Group at the University of Zimbabwe [7] and which had been widely implemented within Zimbabwe and neighbouring countries. Activities were held weekly at workplaces and at locations within the general community (e.g., beer halls and markets) where casual relationships were most frequently formed [15]. The target population comprised sex workers and male clients who form a bridge population in HIV transmission [16] between sex workers and the monogamous (or serial monogamous) majority of women [17,18]. It was posited that the high HIV-1 incidence observed amongst young women could be reduced by altering the behaviour of their older male partners whose own behaviour was intrinsically more risky [19]. The behavioural component would be reinforced in counselling sessions with STI patients and through micro-credit income-generating projects to reduce unmarried women's dependence on commercial sex work. The micro-credit scheme consisted of small interest-free loans repayable over 10 mo, provided to groups and to individuals together with training in small-business management. The targeted activities would be extended to the general population through open days held at local health centres. Besides providing basic HIV/AIDS information, it was envisaged that programme meetings and activities, by their continuous nature, would sustain high levels of awareness of the risks of HIV transmission and would facilitate renegotiation of community social norms, making safer behaviours easier to adopt. 
The key messages of the programme were: (1) remain faithful to one regular sexual partner; (2) use condoms consistently with any casual sexual partners; and (3) seek prompt and effective treatment for any STIs. Syndromic management of STIs at primary healthcare centres was first introduced in Zimbabwe in the 1980s [20] and formed the basis of STI diagnosis and treatment services at baseline in the intervention and control communities. It was envisaged that these services could be strengthened and made more effective through a programme of regular classroom training and on-site supervision of nursing staff, through the introduction of training in systemic counselling for STI patients, and through the provision of small quantities of treatment drugs to cover delays in routine supplies. Quality-assurance procedures applied in the intervention communities included pre- and post-training tests for peer educators and, for nursing staff, attending the syndromic STI management and systemic counselling courses, regular on-site supervision (including random spot checks) and training, refresher courses, routine planning meetings and monitoring of service statistics, and quarterly workshops where detailed programme procedures were reviewed and updated. An interim qualitative process evaluation of intervention activities was conducted during the inter-survey period, and a report on the findings was provided to the implementing organisations.\n\nBODY.METHODS.OUTCOME AND PROCESS MEASURES:\nThe primary outcome of the study was HIV-1 incidence at the community level amongst individuals who were uninfected at baseline. Blood was collected onto Whatman No. 3 filter paper and transported to the Biomedical Research and Training Institute laboratory in Harare. Blood spots were air dried at 4 °C and, for long-term (>1 mo) storage, were kept at −20 °C. For baseline studies, blood was eluted into phosphate-buffered saline, and antibodies to HIV were detected using a dipstick dot EIA (ICL-HIV-1/HIV-2 Dipstick, [PATH, http://www.path.org; produced locally in Thailand]) and a standard protocol [21,22]. All positive results and a 10% sample of negative results were confirmed using a plate EIA (Abbott Third-Generation HIV-1/HIV-2 EIA [http://www.abbott.com] or Genelavia MIXT HIV-1/HIV-2 [Sanofi Diagnostics Pasteur, Marnes La Coquette, France]). At follow-up, a similar protocol was followed. Only the samples from those participants recorded as being HIV seronegative at baseline were tested at follow-up, again using a dot EIA (ICL-HIV-1/HIV-2 Dipstick, [PATH, produced locally in India]). Where seroconversion was indicated, the frozen stored baseline sample was retested to confirm the original negative result using the same dot EIA test. Where the baseline result remained negative, the Abbott EIA test was used to confirm both baseline and follow-up results. The change in place of manufacture of the dot EIA and the exclusive use of Abbott test kits to confirm positive sera at follow-up was due only to changes in the supply of test reagents, and not to perceived changes in sensitivity or specificity [23]. Apart from the principal investigators (based in Harare, London and Oxford) and those nurses given permission by participants requesting voluntary counselling and testing (VCT), all research personnel remained blind to the HIV-1 status of individual participants. 
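The follow-up testing algorithm described at the end of this section can be summarised as a small decision routine. The sketch below is illustrative only: results are simplified to reactive/non-reactive booleans, and the function name and the handling of discordant confirmatory results are assumptions, not part of the published protocol.

```python
def classify_followup_sample(dot_eia_followup, dot_eia_baseline_retest,
                             abbott_baseline, abbott_followup):
    """Illustrative decision logic for follow-up testing of baseline-seronegative participants.
    Each argument is True (reactive) or False (non-reactive)."""
    if not dot_eia_followup:
        return "remains seronegative"                 # no further testing described
    if dot_eia_baseline_retest:
        return "baseline retest reactive"             # original negative result not confirmed
    # Baseline still negative on retest: confirm both samples with the plate EIA.
    if not abbott_baseline and abbott_followup:
        return "confirmed incident seroconversion"
    return "discordant confirmatory results"          # resolution not specified in the text (assumption)
```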
Secondary outcomes, measured at the community and individual level, were self-reported genital ulcers and urethral or vaginal discharge in the past year (STI cases), STI treatment effectiveness (self-reported cessation of symptoms), indicators of sexual and health-seeking behaviour change, and HIV/AIDS knowledge. The behaviour-change variables assessed were sexual debut, sexual partner change in the past year, non-regular partnerships in the past month, and unprotected sex with regular and casual partners in the past 3 y. The data on sexual partnerships and condom use were collected using the Informal Confidential Voting Interview method for 75% of respondents selected at random in the first round of the survey. This method includes procedures to build rapport, ensure a non-judgemental interview approach, and provide reassurance that there are no right or wrong answers to questions of a personal nature, and uses a simple secret voting-box system to reduce embarrassment and guarantee confidentiality in low-development settings [18]. Its use has been shown to be associated with greater disclosure of socially proscribed behaviour in the study population [24]. Process indicators examined comprised changes in knowledge and psychosocial status and indicators of programme coverage and quality.\n\nBODY.METHODS.SAMPLE-SIZE CALCULATIONS:\nInitial sample-size calculations assumed 20% HIV-1 prevalence at baseline, 30% loss to follow-up after 2 y, and 80% power to detect a 40% reduction in HIV-1 incidence in the intervention communities compared with control communities, assuming a background yearly incidence of 2%. Based on six pairs of communities and a co-efficient of variation between communities of 0.15, the required sample size in each community was 1,000. Funding constraints and slower than anticipated implementation of intervention activities led to revisions of the sample size for each community to 800 and the length of follow-up to 3 y, respectively. Assuming a proportionate increase in loss to follow-up to 41%, these arrangements also yielded 80% power to detect a 40% reduction in HIV-1 incidence.\n\nBODY.METHODS.STATISTICAL METHODS:\nTo test the randomisation with small numbers of communities, HIV-1 prevalence, STI history, and socio-demographic characteristics were compared at baseline for study participants in the intervention and control communities, together with uptake of STI treatment and VCT services offered at baseline. Outcome and process indicators were compared for intervention versus control communities. Analysis of the primary outcome was on an intention-to-treat basis. Incident events and person-years at risk of seroconversion were used to calculate HIV-1 incidence rates and unadjusted and adjusted incidence rate ratios (IRR) with 95% confidence intervals (CIs) for each pair of communities. Adjustment was made for sex, 3-y age group, and community-level baseline HIV prevalence. The overall IRRs (unadjusted and adjusted) were taken to be the geometric means of the IRRs for the six pairs of communities. We calculated 95% CIs for each geometric mean as geometric mean ± 1.96 × standard error of the geometric mean. Paired student t-tests on the logarithms of the pair-specific IRRs were used to test whether these differed significantly from unity [25]. The coefficient of variation between communities was calculated based on baseline HIV prevalence using a standard procedure for pair-matched studies [26]. 
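The pair-matched analysis of the primary outcome described above (pair-specific IRRs, an overall IRR taken as their geometric mean with a CI of the form geometric mean ± 1.96 × SE, and a paired t-test on the log IRRs) can be sketched as follows. The six IRR values are invented for illustration and are not the trial data, and the delta-method estimate of the standard error of the geometric mean is an assumption rather than the authors' stated formula.

```python
import numpy as np
from scipy import stats

# Hypothetical pair-specific IRRs (intervention vs control), one per matched community pair.
irr_pairs = np.array([1.10, 1.35, 0.99, 1.42, 1.21, 1.30])

log_irr = np.log(irr_pairs)
geo_mean = np.exp(log_irr.mean())                     # overall IRR as the geometric mean
se_log = log_irr.std(ddof=1) / np.sqrt(len(log_irr))  # SE of the mean log IRR
se_gm = geo_mean * se_log                             # delta-method SE of the geometric mean (assumption)
ci_low, ci_high = geo_mean - 1.96 * se_gm, geo_mean + 1.96 * se_gm

# Testing whether the IRRs differ from unity reduces to a one-sample t-test of the log IRRs against 0.
t_stat, p_value = stats.ttest_1samp(log_irr, popmean=0.0)

print(f"IRR = {geo_mean:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f}), p = {p_value:.3f}")
```

With real data, each pair-specific IRR would first be computed as the incidence rate (events per person-year) in the intervention community divided by the rate in its matched control community.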
Analyses of prevalence for secondary outcome and process variables were conducted separately for male and female respondents seen at both survey rounds by fitting logistic regression models to the individual-level data and adjusting for community pair and, where available, value of variable at baseline. Since most programme activities were targeted and overall coverage of programme activities was therefore limited, sub-group analyses, adjusted for community pair, were done for HIV-1 incidence and behavioural outcomes to assess the individual-level effects of attendance at programme meetings. Data were entered and validated using SPSS-PC (http://calcnet.mth.cmich.edu/org/spss/index.htm) and data analysis was conducted in Stata version 7 (http://www.stata.com). Statistical tests were double-sided and results were taken to be significant at the 5% level.\n\nBODY.METHODS.ETHICAL APPROVAL:\nAll study participants in the intervention and control communities were offered free VCT for HIV-1, an information sheet on HIV/AIDS, results from a diagnostic test for Trichomonas vaginalis [27] (done at baseline only), and free treatment for T. vaginalis and other STIs from a research nurse. Testing and treatment for T. vaginalis was provided because the prevalence of other curable STIs was low in the study areas [22]. Antibodies reactive with T. vaginalis were detected in DBS eluates following a previously described procedure [27,28]. Written informed consent was sought as a condition of enrolment and continuation in the study. Prior ethical approval was obtained from the Research Council of Zimbabwe, number 02187; the Applied and Qualitative Research Ethics Committee in Oxford, United Kingdom, N97.039; and the UNAIDS Research Ethics Committee, ERC 98/03.\n\nBODY.RESULTS.PARTICIPANT FLOW:\nIn round 1 of the census (July 1998 to February 2000), 5,943 and 6,037 eligible individuals in the intervention (total population size 18,104) and control (18,633) communities, respectively, were selected for recruitment into the study cohort (Figure 2). In round 2, 3 y later (July 2001 to February 2003), 1,044 (23%) and 1,144 (26%) of baseline respondents who were still alive had migrated away from the intervention and control communities, respectively, and were therefore lost to follow-up (Figure 2). At both baseline and follow-up, migrants and non-migrants had similar risks of HIV-1 infection and associated behaviour [29]. Of those still resident in the intervention and control communities, 2,664 (75%) and 2,564 (77%), respectively, were interviewed and blood samples taken for a second time. Temporary absence from the usual place of residence was the main reason for non-participation in the intervention (n = 794, 95%) and control (n = 698, 94%) communities. The overall proportions of baseline respondents followed up at the end of the study were 55% and 56% in the intervention and control communities, respectively. The median follow-up of communities was 3.0 y (range of median within communities, 3.0–3.1). Figure 2Flow-Chart Comparing Participation and Follow-Up Rates in the Intervention and Control CommunitiesIndividuals enrolled in round 1 and still resident in the study communities were considered eligible for participation in round 2.\n\nBODY.RESULTS.BASELINE DATA:\nHIV-1 prevalence was higher in the intervention communities than in the control communities (24% versus 21%, risk ratio 1.13 [95% CI 1.05–1.22], p = 0.001). T. 
vaginalis infection, secondary school education, and spatial mobility were more common in the control communities, whilst history of genital discharge and uptake of STI treatment and VCT services offered in the survey were low overall but more frequent in the intervention communities (Table 1). However, the differences in each case were small and were unlikely to be clinically meaningful. Table 1 Baseline Characteristics of the Study Populations \n\nBODY.RESULTS.OUTCOMES AND ESTIMATION:\nMedian follow-up per person was 2.9 y (range 1.4–3.9) and 3.0 y (range 1.5–4.1) in the intervention and control communities, respectively. In total, 4,052 individuals had 212 incident events of HIV-1 during 12,009 person-years at risk, giving an HIV-1 incidence rate of 1.77 per 100 person-years at risk. HIV-1 incidence was higher in communities with higher baseline HIV prevalence (IRR 11.49 [95% CI 1.80–73.40], p = 0.010), but this difference disappeared after adjustment for stratification by community type (p = 0.8). HIV-1 incidence was higher in the intervention communities than in the control communities overall, and in each community type, except in the forestry plantations where it was almost identical (Table 2). The difference was not significant after adjustment for sex, age group, and baseline HIV prevalence (IRR 1.27 [95% CI 0.92–1.75], p = 0.012). The observed coefficient of between-community variation was 0.14. Table 2 HIV Prevalence at Baseline and HIV Incidence and IRRs for Intervention Versus Control Communities Looking at outcome indicators for community members (rather than for communities—the unit of randomisation), self-reported STI symptoms were similar in both sets of communities (Table 3). Treatment for STI symptoms in males was effective more frequently in the intervention communities, with men in the intervention community in five of the six matched pairs reporting reduced symptom recurrence. However, more young women in the intervention than in the control communities had started sex, and reports of unprotected sex with a casual partner in the study period were more common in the intervention communities. No differences were observed in consistent condom use with regular partners between the two sets of communities. In the intervention communities, knowledge about HIV/AIDS was enhanced amongst men, and more respondents reported a close relative or family member with AIDS (sex- and age-adjusted prevalence odds ratio 1.22 [95% CI 1.05–1.42], p = 0.009). Slightly more women in the intervention communities reported that condom use within marriage was becoming acceptable, but a greater proportion of men agreed with the statement that \"condoms reduce the pleasure of sex\". Table 3 Biomedical, Sexual Behaviour, and Psychological Outcomes at Follow-up by Residence in the Intervention and Control Communities A total of 63,261 peer-education meetings were held, and 6.8 million condoms were distributed by the programme in the intervention communities (Table 4). Outputs increased over time as new communities entered the programme. However, owing to high inflation and economic decline, the micro-credit income-generating projects proved impossible to implement. We were able to obtain data on STI episodes treated at clinics in the 11 out of 12 study communities that reported cases to the administrative districts of Mutasa and Makoni. In the three intervention communities each in Mutasa and Makoni, STI cases fell by 66% and 51%, respectively, over the 3-y study period. 
Similar declines of 67% and 52% occurred at clinics in the four control communities in Mutasa and the one control community in Makoni. Coverage of training in syndromic STI management and systemic counselling for nursing staff was high (Table 4). Table 4 Summary of Service Statistics on Programme Output Most of the activities were targeted at high-risk groups. In the general population sample interviewed in the follow-up survey, 1,779 (35%) and 647 (13%) of 5,098 respondents reported attending an HIV/AIDS meeting and a programme meeting, respectively (Table 5). More respondents in the intervention communities than in the control communities attended an HIV/AIDS meeting (41% versus 28%, prevalence rate ratio 1.44 [95% CI 1.33–1.56], p < 0.001) and a programme meeting (20% versus 5%, 4.27 [95% CI 3.52–5.17], p < 0.001), and participation was higher among men than women (prevalence rate ratio 1.32 [95% CI 1.14–1.53], p = 0.002). Fewer women in the intervention communities had heard about HIV/AIDS from external sources or believed that STI drugs were available at their local clinics. Sixty-two (2%) out of 2,528 respondents in the control communities reported spending at least 1 d in the past month in the intervention communities; the equivalent number for respondents in the intervention communities visiting control communities was 70 (3%) out of 2,683. Table 5 Intervention Coverage \n\nBODY.RESULTS.ANCILLARY ANALYSES:\nIn exploratory analysis to assess where the intervention failed, we found that HIV-1 incidence was reduced in males (IRR 0.48 [95% CI 0.24–0.98], p = 0.044) who reported attending programme meetings, after adjustment for the targeting of activities to groups with high-risk behaviour (Table 6). Amongst men who reported one or more casual sexual partners in the past 3 y, fewer of those who attended meetings reported unprotected sex with these partners (prevalence odds ratio 0.45 [95% CI 0.27–0.75], p = 0.002). HIV-1 incidence was not associated with programme participation in women. Table 6 HIV Incidence and IRRs by Meeting Attendance and Sex \n\nBODY.DISCUSSION.INTERPRETATION:\nWe conducted a scientific trial of the feasibility and impact of an integrated community- and clinic-based HIV-1–prevention intervention. The income-generating projects apart, the intervention activities were feasible. The outputs of the programme were extensive with more than 63,000 meetings being conducted and almost 7 million condoms distributed by trained peer educators. Programme messages were considered relevant and realistic. Local STI treatment and counselling services were strengthened and promoted in accordance with the intervention protocol. For male participants, these activities improved HIV/AIDS knowledge, increased the effectiveness of STI treatment, increased consistent condom use with casual partners, and reduced HIV-1 incidence. However, the cluster-RCT results clearly show that the intervention had no positive impact at the community level and suggest possible detrimental effects on the onset of female sexual activity and condom use with casual partners over a 3-y timeframe. Did the cluster-RCT design fail to capture the true effect of the intervention? There are three possibilities: (1) inadequate statistical power; (2) insufficient follow-up; and (3) contamination of intervention within control communities. 
The study design provided adequate statistical power to detect a meaningful average reduction (40%) in HIV-1 incidence in the intervention versus the control communities over a 3-y observation period. In hindsight, an effect size of 40% was too optimistic and the study had insufficient power to detect a smaller effect. However, there was no trend in the results towards reduced HIV-1 incidence in the intervention communities. Largely due to migration, attrition was close to that anticipated in the study design and was comparable to other recent cohort studies [6,10,9,30]. Migrants had similar characteristics and sexual behaviour to non-migrants [29]. The results of the exploratory sub-group analysis generate the hypothesis that high-risk behaviour was reduced in males attending programme meetings but did not translate into a wider impact on HIV-1 incidence at the population level. Changes in core and bridge populations may take more time to reflect in the general population than was observed in the trial. However, a longer period of follow-up would have increased attrition, and the finding of a possible adverse effect at the population level meant that it would not have been ethical to continue with the same intervention. Future trials of behaviour-change interventions may need to include multiple rounds with phased recruitment and (where interim results are favourable) may need to consider phased intervention implementation. We minimised intervention contamination by selecting physically separated study communities, and movements between intervention and control communities were rare. However, a similar peer-education programme was implemented in one control community (small town), and HIV-1–prevention activity was considerable in all control communities that also had greater access to information from external sources. In some cases, programme messages (e.g., promotion of condom use) conflicted with those of other agencies working in the intervention communities. The effects of these other programmes could have limited our ability to detect a reduction in HIV-1 incidence caused by the current intervention. The absence of an observed effect of the intervention was not explained by differences in HIV-1 prevalence, sexual behaviour, STI cofactors, mobility, or socio-demographic composition at baseline. The earlier sexual debut in females and greater unprotected sex with casual partners seen in the intervention communities during the study period were not present at baseline but could reflect increased willingness to report high-risk behaviours in settings where there was more open discourse about HIV and AIDS. The peer-education programme could have had some effect for male but not for unmarried female participants. Preliminary findings from subsequent qualitative investigations indicate that, in the predominantly rural communities in which the study was conducted, poverty and the associated failure of income-generating projects meant that some peer educators were unable to maintain safer behaviours. Given their increased visibility within the community—intended to enhance their status and self-esteem and, thus, to reinforce their commitment to and role as models for behaviour change—they may, inadvertently, have served as negative role models and, thereby, may have contributed to the greater female early-age sexual activity. 
Free distribution of condoms by women still engaging in unprotected commercial sex led to their being poorly valued and reinforced their association with promiscuity.\n\nBODY.DISCUSSION.GENERALISABILITY OF FINDINGS:\nEpidemiological context can affect the impact of interventions [31], and structural obstacles can limit the pace and extent to which activities are implemented and the quality of these activities [32]. The HIV-1 epidemic stabilised in eastern Zimbabwe during the study period, with HIV-1 prevalence declining by 40%–50% in young adults [23]. This decline was accompanied by delayed sexual debut, reduced sexual partner change, and consistent condom use with casual partners [33,23]. Prevalence of syphilis, gonorrhoea, and Chlamydia is low, but non-curable herpes simplex virus type 2 remains common [22]. Risk reduction makes transmission more fragile, and an intervention could have a larger effect when set against secular behavioural changes [2]. Mathematical model simulations suggest that there would also be a greater chance of detecting a significant effect of the intervention even though there would be fewer seroconversions to power the calculation [34,35]. Structural obstacles to intervention implementation included HIV/AIDS mortality which disrupted the programme by claiming the lives of two programme coordinators and several of the nursing staff and peer-educators. Economic decline made the income-generating projects unfeasible and reduced the effectiveness of other components of the intervention. We believe that the coverage of the peer-education programme was satisfactory, given the focus on highly sexually active individuals. Meeting coverage could have been under-estimated in the survey since one-to-one discussions and activities at beer halls and other public places may not have been recognised as meetings by those present. However, the high level of spatial mobility limited the number of people who were reached at the required level of intensity and consistency, whilst national shortages of foreign currency restricted fuel and drug supplies, hampered attempts to extend community activities into the more remote rural areas, and disrupted the STI treatment programme in both the intervention and control communities. The intervention that we evaluated could have greater effect where an HIV-1 epidemic is younger, HIV-1 incidence is greater, local sexual networks are less diffuse, background STI control is weak, herpes simplex virus type 2 is less common, population mobility is lower, and/or the socio-economic climate is stable. We cannot rule out an effect of peer education in the urban intervention community since similar activities were implemented in the control community. Targeted peer education may work better in towns where bar-based sex work is more extensive. The absence of reduced HIV-1 incidence in farming estates reinforces doubts raised by the Harare factory workers study [8] concerning the efficacy of workplace peer education.\n\nBODY.DISCUSSION.OVERALL EVIDENCE:\nThese findings are important since the strategies evaluated—i.e., peer education, condom distribution, and syndromic STI management—are theory-based, have the potential for independent effects [11], and are widely promoted [36,37]. Syndromic STI management was effective in a nascent epidemic [6]. However, our disappointing findings echo those from recent trials [9,12] and emphasise the need for alternative strategies of behaviour-change promotion. 
Social marketing of condoms [14], larger poverty-alleviation programmes to reduce women's reliance on sex work—based on skills training and careful market research rather than on small-scale income-generating projects—and strategies which reach beyond high-activity core groups, such as the Popular Opinion Leader programme [38,39], and client-centred counselling [40], could be more viable and effective in reducing HIV-1 transmission in rural areas. Given the necessary economic conditions, unmarried women may still play a useful role in bar-based programmes since beer halls remain foci for high-risk behaviour [41,15].\n\nBODY.SUPPORTING INFORMATION.TRIAL REGISTRATION:\nThis trial has the registration number ISRNCT00390949 in the International Standard Randomized Controlled Trial Number Register. Found at: http://www.clinicaltrials.gov/ct/show/NCT00390949?order=1\n Protocol S1: Protocol (35 KB DOC). Protocol S2: Revisions to Protocol (35 KB DOC). Text S1: CONSORT Checklist (48 KB DOC). Text S2: Ethical Approval, Information Letter, and Consent Forms (2.8 MB PDF).\n\n**Question:** Compared to Baseline communities that received standard Government services what was the result of Baseline communities that received additional education and preventive measures on Self-reported STI symptoms?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** no significant difference\n"
}
] |
no significant difference
|
407
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Incisional hernia after upper abdominal surgery: a randomised controlled trial of midline versus transverse incision\n\n ABSTRACT.OBJECTIVES:\nTo determine whether a transverse incision is an alternative to a midline incision in terms of incisional hernia incidence, surgical site infection, postoperative pain, hospital stay and cosmetics in cholecystectomy.\n\nABSTRACT.SUMMARY BACKGROUND DATA:\nIncisional hernias after midline incision are commonly underestimated but probably complicate between 2 and 20% of all abdominal wall closures. The midline incision is the preferred incision for surgery of the upper abdomen despite evidence that alternatives, such as the lateral paramedian and transverse incision, exist and might reduce the rate of incisional hernia. A RCT was preformed in the pre-laparoscopic cholecystectomy era the data of which were never published.\n\nABSTRACT.METHODS:\nOne hundred and fifty female patients were randomly allocated to cholecystectomy through midline or transverse incision. Early complications, the duration to discharge and the in-hospital use of analgesics was noted. Patients returned to the surgical outpatient clinic for evaluation of the cosmetic results of the scar and to evaluate possible complications such as fistula, wound dehiscence and incisional hernia after a minimum of 12 months follow-up.\n\nABSTRACT.RESULTS:\nTwo percent (1/60) of patients that had undergone the procedure through a transverse incision presented with an incisional hernia as opposed to 14% (9/63) of patients from the midline incision group (P = 0.017). Transverse incisions were found to be significantly shorter than midline incisions and associated with more pleasing appearance. More patients having undergone a midline incision, reported pain on day one, two and three postoperatively than patients from the transverse group. The use of analgesics did not differ between the two groups.\n\nABSTRACT.CONCLUSIONS:\nIn light of our results a transverse incision should, if possible, be considered as the preferred incision in acute and elective surgery of the upper abdomen when laparoscopic surgery is not an option.\n\nBODY.INTRODUCTION:\nThe rate of incisional hernia after midline incision is commonly underestimated but probably lies between 2 and 20% [1–5]. Thus, incisional hernia is a major postoperative problem. The treatment of incisional hernia is complicated by high rates of recurrences. Recently, in a randomised controlled trial published by Burger et al. [6], midline incisional hernia repair has been shown to be associated with a 10-year cumulative recurrence rate of 63 and 32% for suture and mesh repair, respectively. The midline incision is the preferred incision for surgery of the upper abdomen, despite evidence that alternatives, such as the lateral paramedian and transverse incision, exist and might reduce the rate of incisional hernia [7]. 
Various approaches to opening the abdomen have been advocated over time. The choice for a certain incision is dependent on the exposure necessary for the desired procedure to succeed. A midline incision, be it supraumbilical, infraumbilical or both, is an approach especially suited for emergency and exploratory surgery because of the quick and generous exposure that can be achieved within a few minutes [8, 9]. The avascular nature of the linea alba minimises blood loss during this procedure. A supraumbilical transverse incision may be utilised in case exposure of the upper abdomen is desired. During this incision, the damage inflicted to the segmental arteries and nerves is previously described as being minimal [10]. Previously, only one randomised controlled trial, comparing transverse and true midline incisions, has been published specifically addressing incisional hernia incidence [11]. To determine whether the use of a transverse incision is an alternative to a midline incision for open cholecystectomy in terms of incisional hernia incidence, surgical site infection, postoperative pain and hospital stay, this randomised controlled trial was performed. This trial was conducted in an era when laparoscopic cholecystectomy was not yet available. The possibility of low incisional hernia rates after transverse incisions and the fact that little is known about potential advantages incited us to publish the relevant results of this randomised controlled trial which has been performed in the past and has only been reported in a Dutch thesis by one of the authors (H.L.). The primary endpoint of this study was the incisional hernia incidence after 12 months of follow-up. Secondary endpoints included pain and cosmetic appearance.\n\nBODY.METHODS.PROTOCOL:\nSome 150 consecutive female patients were randomly assigned to a midline or transverse incision as an approach for elective cholecystectomy or combined cholecystectomy and cholangiography (with or without consecutive choledochotomy) (75 and 75 patients, respectively). Emergency procedures were excluded from participation. The sample size is based on an incisional hernia rate reduction from 20 to 6% at a power of 80% and an error rate of 5%. Obtaining informed consent was conducted in accordance with the ethical standards of the Helsinki Declaration of 1975. The investigation reported was performed with informed consent from all of the patients and followed the guidelines for experimental investigation with human subjects and was approved by the medical ethics committee. An independent statistician prepared closed, tamper-proof envelopes containing the random allocation (Fig. 1). Patients were randomised for one of the procedures in theatre through the opening of the envelopes.Fig. 1Flow chart of patient inclusion and follow-up Patient-related factors that were recorded were age, body mass and length and date of operation. Operation-related factors that were recorded were the exact nature of the operation, length of the incision, the thickness of the subcutaneous fat, surgeon performing the procedure, as well as the duration of the operation (skin-to-skin time). In the immediate postoperative period, the use, dose and type of analgesics was recorded and a pain score was administered. The use of analgesics (morphine 7.5 mg intra-muscular injection, 4 h minimum interval between consecutive injections) was monitored for 48 h after surgery; the pain score was administered for the first 6 days after surgery. 
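For orientation, a generic two-proportion sample-size calculation under the stated design assumptions (20% versus 6% hernia rates, two-sided 5% alpha, 80% power). The exact formula used in the original protocol is not reported; this particular pooled-variance approximation gives about 90 patients per group, while other standard approximations (for example, one-sided or arcsine-based) give lower figures closer to the 75 per arm that were enrolled:

```python
# Generic two-proportion sample-size calculation under the stated assumptions.
# The protocol's exact method is not reported; this is one standard approximation.
from math import ceil, sqrt
from scipy.stats import norm

p1, p2 = 0.20, 0.06            # anticipated incisional hernia rates
alpha, power = 0.05, 0.80

z_a = norm.ppf(1 - alpha / 2)  # ~1.96 for a two-sided 5% test
z_b = norm.ppf(power)          # ~0.84 for 80% power

p_bar = (p1 + p2) / 2
n_per_group = ((z_a * sqrt(2 * p_bar * (1 - p_bar))
                + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
               / (p1 - p2) ** 2)
print(ceil(n_per_group))  # ~90 with this pooled-variance formula
```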
In patients assigned to surgery through a midline incision, the skin was incised from just below the xyphoid process to just above the umbilicus. The abdominal wall was opened in the midline by incising the linea alba. A Collin type (two-bladed) self-retaining retractor was used to maintain exposure. The abdominal wall was closed in one layer using single polygalactin 910 sutures (Vicryl; Ethicon, Amersfoort, The Netherlands). The skin was consequently closed using running monofilament nylon sutures (Ethilon; Ethicon, Amersfoort, The Netherlands). Patients randomised for a transverse incision received a right-sided unilateral transverse incision between 3 and 4 cm below the costal margin. The rectus muscle was incised. The fibres of the external and internal obliques and the transverse muscles were separated in the direction of their course. Exposure was achieved through the use of a manually held single-bladed retractor. Closure of the abdominal wall was achieved by closure of the peritoneum and the posterior rectus fascia using a continuous, polygalactin 910 suture (Vicryl; Ethicon, Amersfoort, The Netherlands). The anterior rectus sheath and the fascia of the internal and external transverses were closed using simple interrupted polygalactin 910 sutures (Vicryl; Ethicon, Amersfoort, The Netherlands). Towards the end of both procedures, a Redon low-vacuum drain catheter was placed, which was guided outside the abdominal cavity approximately 5 cm from the incision. The skin was consequently closed using continuous monofilament nylon suture (Ethilon; Ethicon, Amersfoort, The Netherlands). All patients received a dose of 5,000 IU of sodium–heparin on the morning of the procedure as thrombosis prophylaxis.\n\nBODY.METHODS.STATISTICAL ANALYSIS:\nThe Pearson χ2 test was used for comparing percentages. In case of small expected numbers, a Fisher's exact test was performed. Continuous variables were analysed using the Mann–Whitney test. A P-value of 0.05 or less (two-sided) was considered to be statistically significant. Means and medians are expressed ±standard deviation (SD).\n\nBODY.METHODS.FOLLOW-UP:\nPatients returned to the surgical outpatient clinic for evaluation of the cosmetic results of the scar and to evaluate possible complications, such as fistula, wound dehiscence and incisional hernia, after a minimum of 12 months follow-up. The patient and the surgeon evaluated the cosmetic results independently and were asked to rate the scar as unsatisfactory, satisfactory or fine. Furthermore, the length and width of the scar was measured.\n\nBODY.RESULTS.STUDY GROUP:\nSome 150 consecutive patients were randomised for participation in this study during an inclusion period from April 1977 until July 1979. Seventy-five patients received a transverse incision and 75 patients a midline incision (Fig. 1). One patient was withdrawn from further follow-up after developing peritonitis and consequent acute respiratory distress syndrome (ARDS) not related to the closure of the abdominal wall 2 days after surgery (transverse incision group). The patients' average age was 51.9 and 51.4 years for the midline and the transverse incision groups, respectively. Furthermore, no differences were found in the body mass and average length between the two groups (Table 1). A cholecystectomy was performed using a transverse incision in 52 patients and utilising a midline incision in 52 patients also. Fifteen and 16 patients, respectively, underwent a combined cholangiography/cholecystectomy. 
A further 7 and 6 patients, respectively, were treated with a cholangiography/cholecystectomy plus additional choledochotomy and the postexploratory placement of a T-tube.Table 1Baseline characteristics of the patients undergoing surgery, according to study groupVariableMidline incisionTransverse incisionn = 75n = 74Average age (years) ± SD51.9 ± 14.851.4 ± 13.8Average weight (kg) ± SD71.3 ± 14.568 ± 14.3Average length (cm) ± SD163.5 ± 7.8164 ± 7.3\n\nBODY.RESULTS.SURGEON:\nStaff surgeons performed 17% (13/75 patients) of all procedures performed through a midline incision. The remainder of the procedures through a midline incision was carried out under staff surgeon supervision. Staff surgeons performed 14% of all procedures in the transverse incisions study group (10/74 patients) and supervised the remainder. No statistically significant difference was found between the two randomised groups (P = 0.65).\n\nBODY.RESULTS.DURATION OF SURGERY:\nNo significant difference was noted in the skin-to-skin time (in min) for the two different incisions (Table 2). Surgery utilising midline and transverse incision took 56.9 ± 29.3 and 53.2 ± 26.8 min, respectively (P = 0.35). The total duration of the procedures until extubation (in min) did not differ between the midline and transverse incisions (71.0 ± 30.5 and 67.0 ± 27.3, respectively, P = 0.34).Table 2Length of incision, thickness of subcutaneous fat and skin-to-skin time, according to study groupVariableMidline incisionTransverse incisionP-valueLength of incision (mm) ± SDa164 ± 28140 ± 24<0.0001Thickness of subcutaneous fat (mm) ± SDa34.5 ± 13.030.3 ± 12.40.05Skin-to-skin time (min) ± SDa56.9 ± 29.353.2 ± 26.80.40Width of scar (mm) ± SDb8.3 ± 1.43.3 ± 1.2<0.0001aMeasured during surgery in 75 midline and 74 transverse incisionsbMeasured at follow-up in 63 and 60 midline and transverse incisions, respectively\n\nBODY.RESULTS.PAIN AND ANALGESICS:\nSignificantly more patients, having undergone a midline incision, reported pain on day one, two and three postoperatively (P < 0.0001, Table 3). In the midline incision group, 28/75 patients required no or only one dose of analgesics; the remainder required two or more doses. Thirty-one patients operated through a transverse incision required no analgesics or only one dose; 43 patients (the remainder) required two or more. No significant difference in the use of analgesics was found between the groups (P = 0.69).Table 3Postoperatively reported pain, according to study group, shown as the number of patients reporting pain at the time points indicated (percentage), with the remainder of patients reporting no painTime point after surgeryMidline incision n = 75Transverse incision n = 74P-valuePatients reporting pain, n (%)Patients reporting pain, n (%)3–4 h68 (91)60 (81)0.09First day64 (85)39 (53)<0.0001Second day57 (76)23 (31)<0.0001Third day28 (37)9 (12)<0.0001Fourth day5 (7)3 (4)0.72Fifth day0 (0)1 (1)0.50Sixth day0 (0)1 (1)0.50\n\nBODY.RESULTS.COMPLICATIONS:\nPostoperative complications (Table 4) were seen in 16 out of 75 patients (21%) from the midline incision group and in 15% from the transverse incision group (11 patients) (P = 0.30). Briefly, one patient in each group developed cardiac complications; 8 and 6 patients developed urinary retention after the midline and transverse incisions, respectively (P = 0.59). 
Surgical site infections were diagnosed in 7 and 3 patients, respectively (P = 0.33).Table 4Rate of complications after surgery, according to study group, shown as the number of patients diagnosed with complications (percentage)ComplicationMidline incisionTransverse incisionP-valuen = 75 n (%)n = 75 n (%)Cardiac1 (1)1 (1)1Urinary retention8 (12)6 (8)0.59ARDS01 (1)0.50Surgical site infection7 (9)3 (4)0.33Haemorrhage1 (1)00.50Pneumonia01 (1)0.50Total17 (23)12 (16)0.30\n\nBODY.RESULTS.DISCHARGE:\nForty-five (60%) and 42 (57%) patients from the patients having undergone a midline or a transverse incision, respectively, were discharged on day 6 or 7 postoperatively. The remaining patients from each group left hospital care on day 8 or later. The duration of hospital admission did not differ between the two types of incision (P = 0.74).\n\nBODY.RESULTS.COSMETICS:\nThe width and length of all incisions was measured during the follow-up visit (Table 2). The mean width of the scar after the healing of the midline incisions was found to be 8.3 ± 1.4 mm. The mean width of the scar after the healing of the transverse incisions was measured to be 3.3 ± 1.2 mm. This observed difference is significant (P < 0.0001). The length of the incisions was 140 ± 24 mm and 164 ± 28 mm for the transverse and the midline incisions, respectively. The difference in scar length was found to be significant (P < 0.0001).\n\nBODY.FOLLOW-UP:\nEighty-one percent of all patients operated through a transverse incision were seen during the follow-up examination (n = 60). Of the patients operated through a midline incision, 63 out of 75 were seen at the outpatient clinic (84%). The patients that were lost to follow-up could either not be traced or had deceased (Fig. 1). The minimum follow-up for the evaluation of cosmetic results and hernia incidence was 12 months and the maximum was 36 months.\n\nBODY.FOLLOW-UP.INCISIONAL HERNIA:\nFrom the patients that had undergone the procedure through a transverse incision, one (1/60; 2%) presented with an incisional hernia as opposed to 9 patients from the midline incision group (9/63; 14%); 95% confidence interval (CI) 7.5–25.4%. This difference in hernia incidence is significant (P = 0.017). No significant correlation was found between the incisional hernia rate and surgical site infection (P = 0.07).\n\nBODY.FOLLOW-UP.SUBJECTIVE COSMETICS:\nPatients and surgeons alike were asked to rate the appearance of the scar during the postoperative follow-up outpatient clinic visit. Both the surgeons and the patients found the scar resulting from the transverse incision to be more cosmetically pleasing (P < 0.0001 and P = 0.03, respectively, Table 5).Table 5Number of patients and surgeons rating the cosmetics of a scar at follow-upScoreMidline incision (n = 63)Transverse incision (n = 60)Patients, n (%)Surgeons, n (%)Patients, n (%)Surgeons, n (%)Unsatisfactory6 (10)25 (40)2 (3)6 (10)Satisfactory16 (25)27 (43)9 (15)12 (20)Fine41 (65)11 (17)49 (82)42 (70)Total63636060Difference between type of incision: patients P = 0.03; surgeons P < 0.0001\n\nBODY.DISCUSSION:\nThis prospective randomised study of transverse and midline incisions for open cholecystectomy shows that a significant reduction of incisional hernia incidence can be achieved through the use of a transverse incision. Only one other study (published in 1980) reported the incidence of incisional hernia after upper abdominal midline and unilateral transverse incision in a randomised trial. 
No difference between the two techniques (8 and 6% incisional hernia, respectively) was found, but the relatively short follow-up of 6 months, however, may be held accountable for this finding [11]. Three retrospective studies showed rates of incisional hernia of 3.2, 5.4 and 16.5% for midline incision and 1.3, 6.7 and 13.4% for transverse incision without statistically significant differences [12–14]. The possible reason for the rather high incidence of incisional hernia in the midline incision group (14%) may lie in the use of resorbable 910 polygalactin sutures. Nevertheless, the use of the same type of resorbable suture in the closure of the transverse incisions resulted in a 2% hernia rate. There is evidence for the importance of proper technique and choice of incision as a means to reduce incisional hernia being more important than the use of suture material [7]. Furthermore, as mentioned above, it is known that the incidence of incisional hernia in the case of a midline incision lies between 2 and 20%. From our data, the NNT (numbers needed to treat) is calculated to be 8 (95% CI 5–30) and the RRR (relative risk reduction) is 88% (95% CI 23–100%). Luijendijk et al. [15] have published a hernia rate of 2% after Pfannenstiel incisions closed using 910 polygalactin, which is in agreement with our findings in the patients randomised for a transverse incision, emphasising the importance of the incision over the choice of suture material. In our study, significantly fewer patients reported pain on day 1, 2 and 3 after transverse incisions, a result that was also described by other authors [16, 17]. Greenall et al. [18] published a contradictory report (in 1980) in which no significant difference in postoperative pain was found between midline and transverse incisions. The previously mentioned study, however, only analysed 46 out of 572 patients (8%) with regard to pain, which may explain the finding. In the same way, Lacy et al. suspended visual analogue pain scoring in a study comparing midline and transverse incisions for abdominal aortic surgery. Remarkably, the two groups in our study did not differ in terms of postoperative analgesia, a finding that is also reported by Lindgren et al. [17] and Donati et al. [19]. In our study, surgeons as well as patients were significantly more satisfied with the aesthetic appearance after a transverse in comparison with a midline incision. The scars after transverse incisions were found to be significantly shorter and less wide than the midline incisions, which may account for the observed difference. A possible reason for this is that a transverse incision is executed parallel to the prevailing direction of the skin lines on the abdomen and, therefore, the tension on the wound and consequent scar is low. Cholecystectomy has come a long way since this trial. The introduction and widespread acceptance of laparoscopic technique as the treatment of choice has rendered open cholecystectomy to be an operation for exceptional, and perhaps surgically difficult, circumstances. Nowadays, the study reported is hardly feasible, yet, the results are still applicable and very relevant for other surgical procedures in the (upper) abdomen. Knowledge of the favourable results of a transverse incision may aid surgeons in their choice when finding themselves in the unfortunate position of needing conversion to open cholecystectomy. 
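The NNT and RRR quoted above follow directly from the follow-up proportions reported earlier (1/60 transverse versus 9/63 midline); a worked check of these point estimates, together with a Fisher exact test on the same 2x2 table (which gives a p-value close to the reported 0.017; the confidence intervals are not reproduced here):

```python
# Worked check of the NNT, RRR and Fisher exact p-value for the hernia comparison.
from scipy.stats import fisher_exact

risk_midline = 9 / 63       # ~0.143 incisional hernia risk after midline incision
risk_transverse = 1 / 60    # ~0.017 after transverse incision

arr = risk_midline - risk_transverse   # absolute risk reduction, ~0.126
nnt = 1 / arr                          # ~7.9, reported as 8
rrr = arr / risk_midline               # ~0.88, reported as 88%

# 2x2 table (rows: transverse, midline; columns: hernia, no hernia)
odds_ratio, p_value = fisher_exact([[1, 59], [9, 54]])

print(f"ARR {arr:.3f}, NNT {nnt:.1f}, RRR {rrr:.0%}, Fisher p {p_value:.3f}")  # p ~0.017
```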
In conclusion, this investigation on transverse incisions might be helpful in reducing the incidence of incisional hernia in patients after open cholecystectomy. The midline incision is a preferred manner to achieve exposure of the abdominal cavity and is considered to be easily performed and quick. Although the midline incision is generally accepted, the incidence of incisional hernias is surprisingly high [1–5]. The choice for a particular incision should not only be based on exposure, but also on hernia incidence reduction, especially since recurrence rates after hernia repair are reported to be very high. Furthermore, the recurrence rate after incisional hernia repair is a disappointing 63 and 32% for suture and mesh repair, respectively [6]. In the light of these results, incisional hernia prevention is warranted. In this investigation, it is shown that a significant reduction (from 14.5 to 1.7%) of incisional hernia incidence was achieved by using a transverse incision. Hence, a transverse incision should be considered as the preferred incision in acute and elective surgery of the upper abdomen in which laparoscopic surgery is not an option. Full exposure of two quadrants is feasible through the use of a unilateral transverse incision in, for example, biliary, bariatric, liver and colonic surgery. The transverse incision should be part of the abdominal surgeon's armamentarium and is a preferable incision to prevent the high incidence of incisional hernia after abdominal surgery.\n\n**Question:** Compared to Midline incision what was the result of Transverse incision on Better cosmetic appearance of the incision as subjectively assessed by patients?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly increased\n"
}
] |
significantly increase
|
226
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: A heart failure self-management program for patients of all literacy levels: A randomized, controlled trial [ISRCTN11535170]\n\n ABSTRACT.BACKGROUND:\nSelf-management programs for patients with heart failure can reduce hospitalizations and mortality. However, no programs have analyzed their usefulness for patients with low literacy. We compared the efficacy of a heart failure self-management program designed for patients with low literacy versus usual care.\n\nABSTRACT.METHODS:\nWe performed a 12-month randomized controlled trial. From November 2001 to April 2003, we enrolled participants aged 30–80, who had heart failure and took furosemide. Intervention patients received education on self-care emphasizing daily weight measurement, diuretic dose self-adjustment, and symptom recognition and response. Picture-based educational materials, a digital scale, and scheduled telephone follow-up were provided to reinforce adherence. Control patients received a generic heart failure brochure and usual care. Primary outcomes were combined hospitalization or death, and heart failure-related quality of life.\n\nABSTRACT.RESULTS:\n123 patients (64 control, 59 intervention) participated; 41% had inadequate literacy. Patients in the intervention group had a lower rate of hospitalization or death (crude incidence rate ratio (IRR) = 0.69; CI 0.4, 1.2; adjusted IRR = 0.53; CI 0.32, 0.89). This difference was larger for patients with low literacy (IRR = 0.39; CI 0.16, 0.91) than for higher literacy (IRR = 0.56; CI 0.3, 1.04), but the interaction was not statistically significant. At 12 months, more patients in the intervention group reported monitoring weights daily (79% vs. 29%, p < 0.0001). After adjusting for baseline demographic and treatment differences, we found no difference in heart failure-related quality of life at 12 months (difference = -2; CI -5, +9).\n\nABSTRACT.CONCLUSION:\nA primary care-based heart failure self-management program designed for patients with low literacy reduces the risk of hospitalizations or death.\n\nBODY.BACKGROUND:\nLimited literacy skills are common among adults in the United States [1]. Low literacy is associated with increased risk of hospitalization and worse control of chronic diseases [1-4]. Heart failure is a common chronic illness requiring multiple medications and significant self-care. Heart failure is the leading cause of hospitalization in the Medicare population [5]. The complexity of care for heart failure puts people with low literacy at considerable risk for adverse outcomes including hospitalization, worse quality of life, and mortality. Heart failure disease-management interventions appear effective in reducing rehospitalizations and improving quality of life [6]. 
Most randomized clinical trials of heart failure disease management completed over the last 10 years have enrolled patients during, or shortly after, hospitalization and reported the outcome of readmission [6]. Although the designs of these programs vary, several have tested education and support to enhance patient self-management as the main component of the intervention [7-10]. The content of self-management education usually includes teaching to understand medications, reduce salt intake, monitor daily weights, and recognize symptoms. Most programs include structured follow-up either by home visit, phone, or mail. Only a few, uncontrolled studies specifically ask patients to self-adjust their diuretics [11,12]. Heart failure self-management programs may be particularly effective for vulnerable populations, such as those with poor literacy [13,14]. However, to our knowledge, no previous studies have explicitly examined the role of self-management programs in a low literacy population. A recently published study and accompanying editorial suggested that such self-management support may be most effective among vulnerable populations [13,14]. Low literacy may represent a vulnerability for which we should design our programs. Disease management for patients with low literacy may require refined approaches to foster self-management skills. We developed a heart failure self-management program for use by patients with a variety of literacy levels [15]. We performed a randomized controlled trial comparing our self-management program to usual care among outpatients to test if the program could reduce hospitalizations and improve heart failure-related quality of life.\n\nBODY.METHODS.STUDY DESIGN:\nWe conducted a randomized controlled trial in the University of North Carolina (UNC) General Internal Medicine Practice, which serves a wide socioeconomic range of patients. The practice, staffed by over 20 attending faculty and 70 medical residents, cares for over 500 patients with heart failure.\n\nBODY.METHODS.STUDY PARTICIPANTS:\nTo be eligible, patients had to have a clinical diagnosis of heart failure confirmed by their primary provider through a direct interview, and one of the following: 1) chest x-ray findings consistent with heart failure, 2) ejection fraction <40% by any method, or 3) a history of peripheral edema. They also had to have New York Heart Association class II-IV symptoms within the last 3 months. Patients were excluded if they had moderate to severe dementia (based on the treating physician's clinical judgment), terminal illness with life expectancy less than 6 months, severe hearing impairment, blindness, current substance abuse, a serum creatinine >4 mg/dl or on dialysis, a requirement of supplemental oxygen at home, lacked a telephone, or were scheduled to undergo cardiac surgery or awaiting heart transplant. We did not exclude patients on the basis of literacy skill because we felt patients of all literacy levels would benefit from this intervention. Patients who read well often prefer information presented in an easy-to-read format [16]. We accepted referrals from the cardiology clinic if patients met eligibility criteria. This study was approved by the Institutional Review Board of the UNC School of Medicine, and all participants gave informed consent prior to enrollment. For participants who could not adequately read the informed consent document, the research assistant read and explained it to them. 
They were asked to sign a short form indicating that the informed consent document was reviewed and they agreed to participate. When the short form was used, a witness was asked to attest to the adequacy of the consent process.\n\nBODY.METHODS.PROCEDURES:\nParticipants were recruited between November 2001 and April 2003 from the General Internal Medicine and Cardiology Practices at UNC Hospitals. A trained research assistant screened all patients age 30–80 for use of furosemide. If the patient was on furosemide, their physician was queried about the presence of heart failure. If the patient met eligibility criteria and consented to participate, baseline data were collected. We then randomized patients by concealed allocation based on a random number generator to receive the intervention or usual care. All patients were followed for one year. All data were collected in the General Internal Medicine Practice.\n\nBODY.METHODS.INTERVENTION:\nThe intervention was delivered in the General Internal Medicine Practice. The educational materials and disease management intervention were previously described in detail, and the intervention is summarized here [15]. The intervention began with a 1-hour educational session with a clinical pharmacist or health educator during a regular clinic visit. Patients were given an educational booklet designed for low literacy patients (written below the 6th grade level and extensively pre-tested in focus groups and a pilot study [15]) and a digital scale. The educator and patient reviewed the booklet together, including management scenarios. As part of the educational session, patients were taught to identify signs of heart failure exacerbation, perform daily weight assessment, and adjust their diuretic dose. Because this intervention was aimed at patients with low literacy, the health educator used pedagogic strategies felt to improve comprehension for patients with low literacy [17]. For example, the educator had the patient teach back the information [18], engaged the patient in filling out the notebook, and used brainstorming to help the patient incorporate self-management into their lives. The educator, patient, and primary care physician collaborated to establish the patient's \"good weight\" (i.e., where the patient's heart failure was stable) and baseline diuretic dose. The educator then filled in the management plan in the patient's notebook to help the patient better manage weight fluctuations and self-adjust the diuretic dose based on weight (Figure 1). The general plan involved doubling the dosage if weight went up and halving it if weight went down. The program coordinator then made scheduled follow-up phone calls (days 3, 7, 14, 21, 28, 56) and monthly during months 3–6. The follow-up phone calls, each lasting 5–15 minutes, were designed to reinforce the educational session and provide motivation for the patients. Again, the program coordinator had the patient describe their self-management practices and offered feedback to improve them. Patients experiencing worsening symptoms were scheduled acute visits with their physician. We did not provide specialized nursing assessment, care or medication advice beyond diuretic dosing. If the patient's doctor determined that the good weight had changed, the program coordinator would revise the care plan with the patient. 
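A small illustrative sketch of the weight-based dose-adjustment plan described above. The text states only that the diuretic dose was doubled when weight rose and halved when it fell; the weight-change trigger used below is an assumed value for illustration, not a threshold reported by the study:

```python
# Illustrative sketch of the described self-adjustment plan; the +/-1.4 kg trigger
# is an assumption for illustration and is not taken from the study materials.
def adjust_furosemide_dose(current_weight_kg: float,
                           good_weight_kg: float,
                           baseline_dose_mg: float,
                           trigger_kg: float = 1.4) -> float:
    """Return the dose suggested by the (assumed) plan for today's weight."""
    change = current_weight_kg - good_weight_kg
    if change >= trigger_kg:        # weight up: double the baseline dose
        return baseline_dose_mg * 2
    if change <= -trigger_kg:       # weight down: halve the baseline dose
        return baseline_dose_mg / 2
    return baseline_dose_mg         # near the "good weight": keep the baseline dose


# Example: baseline 40 mg, good weight 82 kg, today's weight 84 kg -> 80 mg
print(adjust_furosemide_dose(84.0, 82.0, 40.0))
```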
Patients enrolled in the control group received a general heart failure education pamphlet written at approximately the 7th grade level, and continued with usual care from their primary physician. The only contacts between the research team and the control patients were at enrollment and data collection.\n\nBODY.METHODS.MEASURES:\nWe assessed outcomes at 6 and 12 months through in-person interviews and review of the medical record. To be sensitive to low literacy, all interviews were conducted verbally by a trained research assistant. If patients were unable to come to clinic for the interview, it was conducted by phone. The research assistant was not blinded to the patient's study group. Primary outcomes were death or all-cause readmission and heart failure-related quality of life at the end of 12 months. Data on hospitalization dates were obtained from the patient and confirmed by medical chart review. All admissions, regardless of hospital or cause, were counted. For exploratory analyses, we classified reason for admission as cardiac or non-cardiac. Cardiac admissions included those primarily for heart failure (e.g., shortness of breath and edema relieved by diuresis) and other cardiac causes such as chest pain, arrhythmias, or syncope. Cause of admission was determined by chart review by one of the authors (D.D.) who was blinded to treatment allocation. Heart failure-related quality of life was assessed using a modified version of the Minnesota Living with Heart Failure Questionnaire (MLHF). The MLHF is a 21 question instrument with a 6-point Likert scale response option and scores ranging from 0 to 105 [19]. In pilot testing of the MLHF, we found that low literacy patients had trouble answering questions with the standard 6-point Likert scale [15], so we changed the response scale to 4 points, using a visual display with stars to represent increasing severity. The 4-point Likert scale was scored as 0, 1, 3, and 5 to approximate standard scores on the MLHF. Secondary measures included heart failure self-efficacy, knowledge, and behaviors. Self-efficacy was measured with an 8 item scale developed for the behaviors needed in this trial as suggested by self-efficacy scale developers [20]. Respondents used a 4-point Likert scale yielding a total score from 0–24. We assessed heart failure knowledge using a knowledge test previously developed for this population [15], Heart failure self-management behavior was assessed by asking patients how often they weighed themselves. We used patient self-report and the medical chart to measure age, gender, race, insurance status, income, years of education, medication use, years with heart failure, and the presence of co-morbidities. We measured literacy using the Short Test of Functional Health Literacy in Adults (S-TOFHLA) [21], a well-validated scale that correlates well with other measures of reading ability [22]. Patients who scored in the inadequate literacy range on the S-TOFHLA were considered to have \"low literacy.\" The cut-point for inadequate literacy is roughly analogous to the 4th grade reading level.\n\nBODY.METHODS.SAMPLE SIZE:\nSample size was based on pilot study results showing a 9-point improvement in MLHF scores over 3-months with the intervention [15]. Detecting a 9-point difference between intervention and control group with 80% power and alpha set at 0.05 required 70 patients per group. We aimed to enroll 150 patients to account for possible attrition, but stopped at 127 because funding ended. 
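A sketch of the two-sample calculation behind the reported figure of 70 patients per group. The standard deviation assumed in the protocol is not stated in the text; the value of roughly 19 MLHF points used below is an assumption chosen because it reproduces about 70 per group under the stated 9-point difference, 80% power and two-sided 5% alpha:

```python
# Two-sample (normal-approximation) sample-size sketch for the MLHF comparison.
# The SD of 19 points is an assumed value, not one reported in the text.
from math import ceil
from scipy.stats import norm

delta = 9.0          # detectable difference in MLHF score
sd = 19.0            # assumed common standard deviation
alpha, power = 0.05, 0.80

z_a = norm.ppf(1 - alpha / 2)
z_b = norm.ppf(power)
n_per_group = ceil(2 * ((z_a + z_b) * sd / delta) ** 2)
print(n_per_group)   # 70 under these assumptions
```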
We did not power this study to detect differences in hospitalization, but studies with even smaller numbers of patients have shown a difference for that outcome [7].\n\nBODY.METHODS.STATISTICAL METHODS:\nPatients who did not return any phone calls and did not return for follow-up assessment did not have outcome data for analysis. Patients who withdrew from the study were censored at the time of withdrawal; any data collected prior to withdrawal were included in the analysis. Baseline differences between groups were assessed using t-tests for continuous outcomes and chi-squared tests for categorical outcomes. For MLHF, heart failure self-efficacy and heart failure knowledge, we used two-sample t-tests. Non-parametric tests were also performed for all comparisons, but results did not differ from the parametric tests. Because of the small sample size and the unequal distribution of baseline characteristics, we adjusted for baseline differences using linear regression. Analyses of self-reported outcomes, such as MLHF, excluded patients who died or withdrew from the study before 6 or 12 month data was collected. For hospitalization or death, we used negative binomial regression and censored patients at the time of death or withdrawal from the study. Based on the likelihood ratio test, negative binomial regression was a better fit for the data than a Poisson regression. Additionally, the Vuong test confirmed that a zero-inflated model was inappropriate [23]. Because of uneven distribution of baseline characteristics, we performed analysis of covariance with negative binomial regression to control for baseline differences [24]. We identified the following variables that could contribute to hospitalization or death based on previous studies: age, race, gender, literacy level, hypertension, diabetes, creatinine, MLHF score, use of β-blockers, angiotensin converting enzyme (ACE) inhibitors or angiotensin receptor blockers (ARBs), use of digoxin, and presence of systolic dysfunction [7,25]. Variables were not included in the model if the relationship between the variable and the study group or outcome had a p value greater than 0.3. We started with a model including the following items to arrive at the best point estimate: age, gender, hypertension, creatinine, MLHF, use of β-blockers, and use of ACE inhibitors or ARBs. We then eliminated variables from the model if p > 0.30, and if the point estimate remained within 10% of the initial estimate. We prespecified a subgroup analysis in patients with low literacy to analyze if the intervention had a similar effect. The same analysis described above was repeated for the subgroups of patients with low literacy and those with higher literacy. The initial multivariate model for the subgroups analysis included: age, gender, hypertension, MLHF, use of β-blockers, and use of ACE inhibitors or ARBs.\n\nBODY.METHODS.ROLE OF THE FUNDING SOURCE:\nThe funding sources had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; or preparation, review, or approval of the manuscript.\n\nBODY.RESULTS.PATIENTS:\nWe screened 919 patients on furosemide between November 2001 and April 2003. 127 met eligibility criteria and agreed to participate (Figure 2). Of those not enrolled, 407 did not have heart failure according to their physician, 367 did not meet eligibility criteria and 27 declined to participate. 
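A minimal sketch of the negative binomial rate model described under Statistical Methods, using statsmodels; the data frame, its column names and its values are hypothetical, and person-years enter as the exposure so that the exponentiated coefficient on the treatment indicator is an incidence rate ratio:

```python
# Hypothetical sketch of the negative binomial rate model; df and its values are made up.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "events":       [2, 0, 1, 0, 1, 3, 0, 2, 1, 2],            # hospitalizations or death
    "person_years": [1.0, 0.8, 1.0, 0.9, 0.7, 1.0, 1.0, 0.9, 0.6, 1.0],
    "intervention": [1, 1, 1, 1, 1, 0, 0, 0, 0, 0],
    "age":          [64, 58, 71, 66, 60, 73, 55, 69, 62, 75],
})

model = smf.glm(
    "events ~ intervention + age",
    data=df,
    family=sm.families.NegativeBinomial(),
    exposure=df["person_years"],
).fit()

irr = np.exp(model.params["intervention"])   # adjusted incidence rate ratio
print(irr)
```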
Of those randomized to the control group, 1 never returned after the first visit, 1 withdrew during the study and 4 died during the study. Follow-up was completed for all of the remaining participants (98%) (Figure 3). Of those randomized to the intervention group, 3 never returned after the first visit, 4 withdrew during the study and 3 died during the study. Follow-up was completed for all of the remaining participants (93%). At baseline, most characteristics were similar between the two groups (Table 1). However, the control group had more participants with hypertension, fewer with diabetes, and fewer men. Of heart failure related characteristics, the control group had more participants with systolic dysfunction, and taking β-blockers. The intervention group had more participants taking ACE inhibitors or ARBs, and digoxin. Regardless of these differences, none were statistically significant. The control group did have statistically significantly higher baseline MLHF scores representing worse symptoms at baseline.\n\nBODY.RESULTS.HOSPITALIZATION OR DEATH:\nThere were 68 hospitalizations (65) or deaths (3) in the intervention group and 111 (107 hospitalizations, 4 deaths) in the control group. The crude all-cause hospital admission or death incidence rate ratio (IRR) was 0.69 (95% CI 0.40, 1.19). After adjusting for age, gender, use of ACE inhibitor or ARB, use of a β-blocker, presence of hypertension, and baseline MLHF, intervention patients were less likely to have the outcome (IRR = 0.53; 95% CI 0.32, 0.89). 61% of patients in the control group had at least one hospitalization or died, and 42% of patients in the intervention group had at least 1 hospitalization or died (p = 0.13).\n\nBODY.RESULTS.CARDIAC HOSPITALIZATION:\n39% of patients in the control group and 34% of patients in the intervention group had at least one hospitalization for cardiac causes (p = 0.55). The unadjusted IRR was 0.79 (95% CI 0.42, 1.5). After adjusting for baseline differences, the IRR was 0.85 (95% CI 0.44, 1.7).\n\nBODY.RESULTS.HEART FAILURE-RELATED QUALITY OF LIFE:\nIn unadjusted analysis, the control group, on average, improved 5 points on the MLHF and the intervention group improved by 1 point. The difference was not statistically significant (3.5 points, 95% CI 11, -4, p = 0.36). After adjusting for baseline differences between the groups, the difference was 2 points (95% CI 9, -5, p = 0.59) suggesting no effect on heart failure-related quality of life.\n\nBODY.RESULTS.OTHER OUTCOMES.KNOWLEDGE:\nHeart failure related knowledge improved more in the intervention group than in the control group. Mean difference in score improvement was 12 percentage points (95% CI 6, 18; p < 0.001).\n\nBODY.RESULTS.OTHER OUTCOMES.SELF-EFFICACY:\nHeart failure self-efficacy improved more in the intervention group than in the control group. Mean difference in score improvement was 2 points (95% CI 0.7, 3.1; p = 0.0026).\n\nBODY.RESULTS.OTHER OUTCOMES.SELF-CARE BEHAVIOR:\nSignificantly more patients in the intervention group than in the control group reported daily weight measurement at 12 months (79% vs. 29%, p < 0.001).\n\nBODY.RESULTS.SUBGROUP ANALYSES ACCORDING TO LITERACY:\nTwenty-four patients in each group had inadequate literacy based on the S-TOFHLA (Table 2). Among these patients, there was no difference in quality of life score in unadjusted and adjusted analyses (difference = -1.6; 95% CI -15, 12); p = 0.81). For the combined outcome of hospitalization or death, the unadjusted IRR was 0.77 (95% CI 0.30, 1.94). 
After adjusting for baseline differences, the IRR was 0.39 (95% CI 0.16, 0.91). Seventy-five patients had marginal or adequate literacy based on the S-TOFHLA. We found no difference in quality of life score in unadjusted and adjusted analyses (difference = -4.2; 95% CI -14, 6; p = 0.40). Among patients in the higher literacy group, the unadjusted IRR for hospitalization or death was 0.65 (95% CI 0.33, 1.3). After adjusting for baseline differences, the IRR was 0.56 (95% CI 0.30, 1.04). We did not find a statistically significant effect modification between literacy and the intervention.\n\nBODY.DISCUSSION:\nA heart failure self-management program designed for patients with low literacy reduced the rate of the combined endpoint of hospitalization or death. The prespecified subgroup analyses suggest that patients with low literacy benefited as much from the intervention as the patients with higher literacy. The success of our intervention reflects the goals of our program. We designed an easy-to-read and use educational booklet and self-management plan, and focused on overcoming barriers to learning self-management [15]. Our intervention was founded on teaching self-management. We focused on helping patients understand signs and symptoms of worsening heart failure and perform self-adjustment of diuretics based on weight fluctuation. Many care providers would not attempt to teach patients, particularly those with low literacy, how to self-adjust their diuretic medication. We found that, with careful teaching, many patients incorporated this strategy into their daily routine successfully, as demonstrated by improved self-care behaviors. Teaching self-adjustment of diuretics, rather than the conventional teaching to call the care provider if weight fluctuates, empowers patients to take more control over their illness. Self-adjustment of diuretic dose is a prominent aspect of the self-management training we provided to the intervention patients. Other programs to improve patient self-management have not been explicit in teaching patients to self-adjust their diuretic dose based on weight fluctuation. Although our outcomes are comparable to others', using this approach puts more control into the hands of the patient. Furthermore, our intervention appears effective among patients with low literacy skills, a group often overlooked for empowering interventions. Our study adds to the growing literature on disease management programs for patients with heart failure [6], particularly those that focus on self-management training [7-10]. Studies focusing on self-management training have demonstrated comparable improvements in hospitalization rates to more comprehensive programs that aim to improve the quality of pharmaceutical prescribing, provide home visits, and take place in specialized cardiology clinics [6]. Such comprehensive programs have also been shown to reduce mortality, but self-management programs have not [6]. We did not detect any difference in heart failure related quality of life which was the outcome we powered our study to detect. Other self-management studies that have found improved quality of life have enrolled patients during a heart failure hospitalization [8,9]; however, we enrolled patients in the outpatient setting while they were clinically stable. Improving quality of life for stable outpatients may be more difficult for this type of intervention. 
We have previously reported that patients with diabetes and low literacy benefited more from a disease management intervention than those with higher literacy skills [26]. A similar result in two different chronic diseases substantiates the claim that appropriately designed disease management programs may have greater effect for low literacy or vulnerable populations, who are most at risk for adverse outcomes with usual care. This finding is particularly important in light of the recent study by DeBusk and colleagues that did not find a reduction in hospitalization with a well-designed comprehensive intervention [13]. The authors and an accompanying editorial [14] suggested that the failure to detect improvement may have occurred because the patients studied were less at-risk than in other studies. They called for more research to determine better ways of targeting disease management. We believe that low literacy is an important marker for vulnerability to adverse outcomes, and that disease management programs targeted to patients with low literacy may be an effective way of focusing resources on those most able to benefit. If patients with low literacy are to be preferentially recruited for such programs, innovative outreach and screening efforts will likely be required, as patients with low literacy may face particular barriers to accessing such care. This study should be interpreted in light of its limitations. Research assistants were not blind to group assignment during the assessment of self-reported outcomes. As such, patients in the intervention may have been more likely to inflate their responses in an effort to please the interviewer. This effect would tend to inflate patient responses to the subjective assessments of heart failure-related quality of life, self-efficacy, and self-care behaviors. The MLHF questionnaire was modified from its original form to make it easier for patients with low literacy to respond. This change in the scale may have changed its ability to detect important changes in heart failure related quality of life. Because the groups' mean scores were almost identical, we do not feel this limitation changed our results. In a similar vein, most questionnaires are not validated in low literacy populations, raising questions as to their ability to perform to the same standards. Our sample size was small, which did not allow for an even distribution of baseline variables among the groups. We controlled for baseline differences between groups in our analysis. While it is controversial whether or not to control for baseline differences in randomized controlled trials, some analysts have argued that doing so improves the power without introducing bias [24]. A larger, multi-site study would offer better control of confounders, better generalizability, and more power to determine differences in effect according to literacy. We did not collect data on the resources needed to implement this type of intervention in usual settings, and such a study and cost-effectiveness analysis would be helpful for most interventions of this type. We used health educators, not nurses or physicians, to deliver the intervention. By using less highly trained individuals to deliver the intervention, we enabled nurses and physicians to perform clinical tasks more commensurate with their training. 
Other studies that have performed global cost estimates have found that the savings from reductions in hospitalizations exceed the cost of the intervention [6].\n\nBODY.CONCLUSION:\nIn conclusion, our heart failure self-management program, designed for patients of all literacy levels, appears to reduce rates of hospitalization and death. Patients with low literacy, and other vulnerable patients, may stand to benefit most from these programs. Further research into the design, implementation, and dissemination of disease management programs for low literacy patients will be crucial for meeting the health care needs of the growing population of patients with chronic illness.\n\nBODY.COMPETING INTERESTS:\nDrs. DeWalt and Pignone have received honoraria and grants from Pfizer, Inc., Dr. Rothman has received grants from Pfizer, Inc., and Dr. Sueta is in their speakers bureau.\n\nBODY.AUTHORS' CONTRIBUTIONS:\nDD conceived of the study, participated in its design and coordination, performed statistical analyses, interpretation of the data, and drafted the manuscript. RM, MB conceived of the study and participated in its coordination. MK, KC coordinated the study, and collected the data, RR, CS participated in study design and interpretation of the data. MP conceived of the study, participated in its design and coordination, and interpretation of the data. All authors reviewed the manuscript for important intellectual content and gave final approval.\n\nBODY.PRE-PUBLICATION HISTORY:\nThe pre-publication history for this paper can be accessed here:\n\n**Question:** Compared to Standard information about self-care what was the result of Follow-up and thorough education on self-care on Knowledge about heart failure?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly increased\n"
}
] |
significantly increased
|
368
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Efficacy and Tolerability of Budesonide/Formoterol in One Hydrofluoroalkane Pressurized Metered-Dose Inhaler in Patients with Chronic Obstructive Pulmonary Disease\n\n ABSTRACT:\nBackground: Combination therapy with a long-acting bronchodilator and an inhaled corticosteroid (ICS) is recommended in patients with chronic obstructive pulmonary disease (COPD) who have frequent exacerbations. The efficacy and tolerability of the combination of budesonide/formoterol have been demonstrated in patients with COPD when administered via the dry powder inhaler (DPI) in a 1-year study and when administered via the hydrofluoroalkane (HFA) pressurized metered-dose inhaler (pMDI) in a 6-month study. Objective: This study assessed the long-term efficacy and tolerability of budesonide/formoterol HFA pMDI in patients with moderate to very severe COPD. Methods: This was a 12-month, randomized, double-blind, double-dummy, parallel-group, active- and placebo-controlled, multicentre study (NCT00206167) of 1964 patients aged ≥40 years with moderate to very severe COPD conducted from 2005 to 2007 at 237 sites in the US, Europe and Mexico. After 2 weeks of treatment based on previous therapy (ICSs, short-acting bronchodilators allowed), patients received one of the following treatments twice daily: budesonide/formoterol pMDI 160/4.5 μg × two inhalations (320/9 μg); budesonide/formoterol pMDI 80/4.5 μg × two inhalations (160/9 μg); formoterol DPI 4.5 μg × two inhalations (9 μg); or placebo. Main outcome measures: The co-primary efficacy variables were pre-dose forced expiratory volume in 1 second (FEV1) and 1-hour post-dose FEV1. Results: Budesonide/formoterol 320/9 μg demonstrated greater improvements in pre-dose FEV1 versus formoterol (p = 0.008), and both budesonide/formoterol doses demonstrated greater improvements in 1-hour post-dose FEV1 versus placebo (p < 0.001). The rate of COPD exacerbations was lower in both budesonide/formoterol groups compared with formoterol and placebo (p ≤ 0.004). Both budesonide/formoterol doses were more effective than placebo (p ≤ 0.006) for controlling dyspnoea and improving health status (St George's Respiratory Questionnaire). All treatments were generally well tolerated. The incidence of pneumonia was not different for active (3.4–4.0%) and placebo (5.0%) groups. Conclusions: Budesonide/formoterol pMDI (320/9 μg and 160/9 μg) improved pulmonary function and reduced symptoms and exacerbations over 1 year in patients with moderate to very severe COPD. Only budesonide/formoterol pMDI 320/9 μg demonstrated greater efficacy for both co-primary variables compared with formoterol DPI 9 μg. 
Both budesonide/formoterol pMDI dosages were well tolerated relative to formoterol and placebo.\n\nABSTRACT.ELECTRONIC SUPPLEMENTARY MATERIAL:\nSupplementary material is available for this article at 10.2165/00003495-200969050-00004 and is accessible for authorized users.\n\nBODY.BACKGROUND:\nIn patients with chronic obstructive pulmonary disease (COPD) and frequent exacerbations, the combination of a long-acting bronchodilator and an inhaled corticosteroid (ICS) is recommended.[1] Budesonide and formoterol delivered via a single dry powder inhaler (DPI; Symbicort® Turbuhaler®; AstraZeneca, Lund, Sweden) have been shown to improve pulmonary function[2,3] and reduce COPD exacerbation rates.[2,3] Fixed-combination budesonide/formoterol is available in the US in a hydrofluoroalkane pressurized metered-dose inhaler (pMDI; Symbicort® Inhalation Aerosol; AstraZeneca LP, Wilmington, DE, USA). While most patients with COPD can benefit from either a pMDIor DPI, the pMDI formulation allows patients whomay have inadequate peak inspiratory flow for optimal delivery from a DPI to use the pMDI effectively. A recent 6-month study demonstrated the efficacy and tolerability of budesonide/formoterol pMDI in patients with moderate to very severe COPD.[4] Treatment with budesonide/formoterol pMDI resulted in significant improvements in pulmonary function, dyspnoea and health-related quality of life; however, the duration of the study was limiting with respect to the analysis of low-frequency events, such as COPD exacerbations, and precluded an assessment of safety outcomes with long-term exposure.[4] In this study, the long-term efficacy and tolerability of two dosage strengths of budesonide/formoterol pMDI were assessed over 12 months in patients with moderate to very severe COPD.\n\nBODY.PATIENTS AND METHODS.PATIENTS:\nThe inclusion criteria were designed to select a population with moderate to very severe COPD with previous exacerbations (i.e. appropriate candidates for combination ICS/long-acting β2-adrenoceptor agonist [LABA] therapy): age ≥40 years, diagnosis of symptomatic COPD for >2 years, ≥10 pack-year smoking history, prebronchodilator forced expiratory volume in 1 second (FEV1) of ≤50% of predicted normal and prebronchodilator FEV1/forced vital capacity (FVC) of <70%. Patients were to have a Modified Medical Research Council dyspnoea scale score of ≥2 and a history of at least one COPD exacerbation requiring oral corticosteroids or antibacterials within 1–12 months before the first study visit. Additional enrolment criteria were the same as those in a similar 6-month study by Tashkin et al.[4]\n\nBODY.PATIENTS AND METHODS.STUDY DESIGN:\nThis was a randomized, double-blind, double-dummy, parallel-group, active- and placebo-controlled, multicentre trial (ClinicalTrials.gov identifier: NCT00206167) conducted from 2005 to 2007 at 237 sites in the US, Europe and Mexico. The study protocol was approved by a local institutional review board and ethics committee, and written informed consent was obtained from patients. The study was designed to conform with the Declaration of Helsinki, and was consistent with the International Conference on Harmonisation and Good Clinical Practice and applicable regulatory requirements. After meeting eligibility criteria, patients entered a 2-week run-in period, during which they received ICS monotherapy if previously stable on ICS (alone or in combination) and ipratropium bromide at a fixed dose if previously receiving anticholinergics. 
Albuterol (salbutamol) was permitted for rescue use throughout the study. At randomization, all previous ICSs were discontinued and patients were randomized to one of four treatments (figure 1). Patients who were treated with ipratropium bromide during the run-in period were allowed to continue on it throughout the study period. Other allowed and disallowed concomitant medications were the same as those reported in the study by Tashkin et al.[4]\nFig. 1Study design. To maintain blinding, patients received both a pressurized metered-dose inhaler (pMDI) and a dry powder inhaler (DPI) containing either active treatment or double-dummy placebo (PL) as appropriate. This study used formoterol (FM) DPI (Oxis® Turbuhaler®, AstraZeneca, Lund, Sweden) as the FM comparator because FM is not available as a hydrofluoroalkane pMDI in the US. A previous study in asthma patients reported equivalent FM-related bronchodilatory effects when FM was administered in combination with budesonide (BUD) via pMDI or alone via DPI.[5] Patients were asked to return to the clinic for follow-up visits 3–8 at the end of months 1, 2, 4, 6, 9 and 12, and received a telephone call 4 weeks after the last clinic visit. bid = twice daily; R = randomization.\n\nBODY.PATIENTS AND METHODS.EFFICACY EVALUATIONS:\nThe co-primary efficacy variables, measured at all clinic visits, were pre-dose FEV1 and 1-hour post-dose FEV1. Additional pulmonary function variables included pre-dose and 1-hour post-dose FVC measured at all clinic visits, and morning and evening peak expiratory flow (PEF) recorded daily in patient diaries. In a subset of patients (n = 491 of 1964; 25%) who agreed to undergo spirometry testing, 12-hour serial spirometry was performed and pre-dose and 1-hour post-dose inspiratory capacity (IC) were collected at randomization and months 6 and 12. FEV1 was measured pre-dose and at 5, 15, 30, 60, 120, 180, 240, 360, 480, 600 and 720 minutes post-dose. Baseline-adjusted average 12-hour FEV1 was calculated as the area between the 12-hour post-dose FEV1-over-time curve and the baseline pre-dose FEV1, divided by observation time. Spirometry was performed according to American Thoracic Society guidelines.[6] Crapo-predicted normals for FEV1 were used.[7] Secondary efficacy endpoints included COPD exacerbations, dyspnoea (separate and composite scores) and health status, all assessed as previously described.[4] A COPD exacerbation was defined as worsening of COPD requiring an oral corticosteroid or hospitalization.[4] Dyspnoea was assessed daily before the evening dose of study medication using the Breathlessness Diary, a validated single-item component of the Breathlessness Cough and Sputum Scale (BCSS).[8] Health status was assessed at months 1, 2, 6 and 12 using the St George's Respiratory Questionnaire (SGRQ).[9,10] Sleep score, percentage of awakening-free nights (sleep score of 0) and study rescue medication use were assessed as previously described by Tashkin et al.[4]\n\nBODY.PATIENTS AND METHODS.SAFETY EVALUATIONS:\nSafety was assessed by adverse event (AE) reporting. Pneumonia events were reported by physicians based on the Medical Dictionary for Regulatory Activities (version 10.0) pneumonia-related preferred terms (pneumonia, bronchopneumonia, lobar pneumonia or pneumonia staphylococcal). 
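Relating to the baseline-adjusted average 12-hour FEV1 defined in the efficacy evaluations above (the area between the 12-hour post-dose FEV1-over-time curve and the baseline pre-dose FEV1, divided by observation time), the sketch below shows one way such a quantity could be computed with the trapezoidal rule. The FEV1 values are hypothetical, and the exact observation window used as the denominator (here, the span of the post-dose assessments) is an assumption rather than a detail taken from the paper.

```python
import numpy as np

# Post-dose assessment times (minutes) from the serial spirometry schedule
times = np.array([5, 15, 30, 60, 120, 180, 240, 360, 480, 600, 720], dtype=float)

# Hypothetical post-dose FEV1 values (L) and a hypothetical pre-dose baseline (L)
fev1_post = np.array([1.32, 1.36, 1.38, 1.40, 1.39, 1.37, 1.36, 1.33, 1.31, 1.30, 1.29])
fev1_baseline = 1.20

# Area between the post-dose curve and the baseline (trapezoidal rule),
# divided by the observation time, giving an average change over 12 hours
area = np.trapz(fev1_post - fev1_baseline, times)
avg_12h_change = area / (times[-1] - times[0])
print(f"Baseline-adjusted average 12-hour FEV1: {avg_12h_change:.3f} L")
```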
Vital signs and 12-lead ECGs were evaluated as previously described by Tashkin et al.[4] Subsets of patients were assessed for 24-hour urinary cortisol (n = 179), 24-hour Holter monitoring (n = 520), bone mineral density (BMD) at the hip and spine regions (n = 326), and ophthalmological assessments (n = 461), including intraocular pressure and progression of lenticular opacities. In all patients, blood and urine samples were collected, and comprehensive physical examinations were carried out at the time of screening and at the end of month 12. Blood and urine samples were analysed by a central laboratory (Quest Diagnostics Clinical Laboratories, Van Nuys, CA, USA). Vital signs, including heart rate and blood pressure, were measured at all study visits. Samples to assess 24-hour urinary cortisol levels were collected in a subset of patients at or before randomization and within 1 week before the 6- and 12-month visits, and analysed by Quest Diagnostics. 24-Hour Holter monitoring was performed at screening and months 1 and 4, BMD assessments (two sets of dual energy x-ray absorptiometry scans in the hip and spine regions) were performed at screening and the end of month 12, and ophthalmology assessments (intraocular pressure and lenticular opacities) were performed at screening and the end of months 6 and 12.\n\nBODY.PATIENTS AND METHODS.STATISTICAL ANALYSES:\nThe efficacy analysis set (i.e. intent-to-treat population) included all randomized patients who received at least one dose of randomized study medication and contributed sufficient data for at least one co-primary or secondary efficacy endpoint to be calculated during the randomized treatment period. The safety analysis population included all randomized patients who received at least one dose of randomized study medication and from whom any post-randomization data were available. For the subsets of patients who underwent serial spirometry, 24-hour urinary cortisol, BMD, 24-hour Holter monitoring and ophthalmological analyses, analysis sets were defined as those patients who received at least one dose of randomized study medication and had baseline and on-treatment values for the variable being assessed. Patients who discontinued prematurely completed final visit (month 12) assessments at the time of discontinuation, followed by the 4-week follow-up. A sample size of approximately 400 patients per treatment group was calculated to allow 90% power to detect a reduction in the number of COPD exacerbations of approximately 30%, adjusting for overdispersion of 2.3. This sample size ensured >95% power to detect a difference of 0.10L in FEV1, given an estimated standard deviation of 0.3 L. All tests were two-sided, with p ≤ 0.05 considered significant. The prespecified primary comparators were formoterol DPI for pre-dose FEV1 to demonstrate the contribution of budesonide and placebo for 1-hour post-dose FEV1. To address issues of multiplicity relating to multiple-dose comparisons and multiple variables of interest, a fixed-sequence testing procedure was adopted. The primary comparisons were prespecified as budesonide/formoterol pMDI 320/9 μg compared with (i) placebo for pre-dose FEV1 and 1-hour post-dose FEV1; and (ii) formoterol DPI 9 μg for pre-dose FEV1. If significant differences were obtained, comparisons continued with budesonide/formoterol pMDI 320/9 μg compared with placebo for (i) dyspnoea; (ii) SGRQ total score; and (iii) number of exacerbations. 
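The sample-size statement above can be checked approximately with a standard two-sample t-test power calculation: a 0.10 L difference with an SD of 0.3 L corresponds to a standardized effect of about 0.33, which with 400 patients per group gives power well above 95%, consistent with the paper's claim. The sketch below reproduces only that FEV1 calculation; the exacerbation sample-size calculation (90% power for a ~30% rate reduction with overdispersion 2.3) requires a count-model calculation that is not attempted here.

```python
from statsmodels.stats.power import TTestIndPower

# Design values reported in the text: 0.10 L difference, SD 0.3 L,
# ~400 patients per treatment group, two-sided alpha = 0.05.
effect_size = 0.10 / 0.3  # standardized difference
power = TTestIndPower().power(
    effect_size=effect_size, nobs1=400, alpha=0.05, ratio=1.0, alternative="two-sided"
)
print(f"Power to detect a 0.10 L FEV1 difference: {power:.3f}")  # ~0.997, i.e. >95%
```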
If significant differences were obtained for each of these key secondary variables, the lower dose of budesonide/formoterol pMDI was tested on the co-primary variables, as previously described, and, if significant differences were obtained, testing continued with number of exacerbations, dyspnoea and SGRQ total score compared with placebo. For all secondary efficacy variables, the primary comparison was budesonide/formoterol pMDI versus placebo. Primary and secondary variables were assessed as the change from baseline to the average over the randomized treatment period except for average 12-hour FEV1, mean FEV1 at 12 hours and SGRQ at end of treatment. Changes from baseline in the co-primary efficacy variables were analysed via analysis of co-variance (ANCOVA), adjusting for treatment, country and baseline value. The following three key secondary efficacy variables were identified: (i) dyspnoea; (ii) SGRQ total score; and (iii) exacerbations. Secondary variables were analysed similar to the co-primary efficacy variables; however, exacerbation rate (number/patient-treatment year) was analysed using a Poisson regression model, adjusted for country and differential randomization time, and time to first COPD exacerbation was described via Kaplan-Meier plot and analysed using the log-rank test. The number and percentage of patients experiencing clinically meaningful changes (based on a prespecified minimal important difference) in dyspnoea were analysed using the Cochran-Mantel-Haenszel test, adjusting for country. Geometric mean 24-hour urinary cortisol levels at end of treatment were compared between treatment groups using a multiplicative ANCOVA model. Mean changes from baseline to the average during the randomized treatment period in 12-lead ECG, and Holter variables and ophthalmology variables were analysed using a model similar to that used for the co-primary variables. BMD variables (natural logarithm of the analysis timepoint minus the natural logarithm of the respective baseline value for hip and spine) were analysed using an ANCOVA model adjusting for country, treatment and natural logarithm of the baseline value. Other safety data were summarized using descriptive statistics. No formal hypothesis testing of the safety data was performed.\n\nBODY.RESULTS.PATIENTS:\nOf 1964 randomized patients, 1355 completed the study (figure 2). Discontinuation was greater with placebo versus budesonide/formoterol and formoterol driven by withdrawal of consent. Time to discontinuation was significantly (p ≤ 0.004) prolonged in both budesonide/formoterol groups versus placebo. AE was the most common reason for study discontinuation, with a similar incidence across treatments (12.1–13.5%). Approximately 60% of patients had documented use of an ICS, either alone or in combination before entering the study. The percentage of patients who discontinued from the study in the formoterol and placebo groups was slightly higher in patients previously treated with ICS (34.9% and 38.0%, respectively) compared with those not previously receiving ICS before the study (26.8% and 33.9%, respectively). In contrast, patients in the budesonide/formoterol 320/9-μg group previously receiving ICS had a lower discontinuation rate compared with those not previously receiving ICS (23.1% vs 34.1%, respectively). In the budesonide/formoterol 160/9-μg group, the percentage of patients who discontinued the study was similar for those who were previously receiving an ICS (29.5%) and those who were not (28.2%).\nFig. 
2Patient disposition. AE = adverse event; bid = twice daily; BUD = budesonide; DPI = dry powder inhaler; FM = formoterol; med = medication; PL = placebo; pMDI = pressurized metered-dose inhaler. Most demographic and disease characteristics at baseline were similar across treatments (table I). However, in the serial spirometry subset (n = 491), mean percentage reversibility was greater in the budesonide/formoterol 320/9 μg (19.9%), budesonide/formoterol 160/9 μg (20.7%) and placebo (19.5%) groups versus formoterol (16.9%). COPD severity[1] was moderate in 17.8%, severe in 60.4% and very severe in 21.5% of patients. Common co-morbid conditions included hypertension (41.6%), lipid profile abnormalities (22.0%), cardiac disease (17.7%), diabetes mellitus (11.0%), osteoporosis (10.5%), cataracts (5.2%), atrial fibrillation/arrhythmia (4.3%) and congestive cardiac failure (2.9%).\nTable IPatient demographic and baseline clinical characteristics of randomized patients\n\nBODY.RESULTS.EFFICACY EVALUATIONS.PULMONARY FUNCTION.CO-PRIMARY ASSESSMENTS:\nImprovements in pre-dose FEV1 were significantly greater for budesonide/formoterol 320/9 μg compared with formoterol (primary comparison; p = 0.008) and for both budesonide/formoterol dosages compared with placebo (p < 0.001) [figure 3a]. Although improvements in 1-hour post-dose FEV1 were significantly greater for both budesonide/formoterol dosages compared with placebo (primary comparison; p < 0.001), budesonide/formoterol 320/9 μg also resulted in significantly greater improvements compared with formoterol (p = 0.023) [figure 3b]. Improvements from baseline were apparent at the first assessment (pre-dose FEV1 at end of month 1; 1-hour post-dose FEV1 at day of randomization) and overall maintained over the 12-month treatment period for both budesonide/formoterol dosages (figure 3a and b).\nFig. 3Co-primary efficacy endpoints. Least squares mean change from baseline by study visit over the randomized treatment period in (a) pre-dose forced expiratory volume in 1 second (FEV1) and (b) 1-hour post-dose FEV1. BUD = budesonide; DPI = dry powder inhaler; FM = formoterol; pMDI = pressurized metered-dose inhaler. * p < 0.001 vs placebo; † p ≤ 0.023 vs FM.\n\nBODY.RESULTS.EFFICACY EVALUATIONS.PULMONARY FUNCTION.SECONDARY ASSESSMENTS:\nMorning and evening PEF improved significantly for all active treatments compared with placebo (p ≤ 0.012) and for both budesonide/formoterol dosages compared with formoterol (p ≤ 0.017) [table II]. Results for pre-dose and 1-hour post-dose FVC are presented in the online supplement (see figures S1a and b in the supplementary material ['ArticlePlus'] at http://links.adisonline.com/DGZ/A5).\nTable IIMean (SD) changes from baseline in additional pulmonary function assessments In the serial spirometry subset, a ≥15% improvement in FEV1 was observed at 5 minutes after dose administration (first assessment) with all active treatments on the day of randomization (budesonide/formoterol 320/9 μg, 17.7%; budesonide/formoterol 160/9 μg, 20.3%; formoterol 9 μg, 16.5%) and at end of treatment (budesonide/formoterol 320/9 μg, 21.8%; budesonide/formoterol 160/9 μg, 22.4%; formoterol 9 μg, 15.0%) [figures 4a, b and c]. There was no evidence of a diminished effect at end of treatment over the 12-hour period in the budesonide/formoterol groups. 
Mean FEV1 at 12 hours and baseline-adjusted average 12-hour FEV1 were significantly improved with both budesonide/formoterol dosages compared with placebo on the day of randomization and at end of treatment (p ≤ 0.002) [table II]. Although both budesonide/formoterol dosages demonstrated significant improvements in mean FEV1 at 12 hours and in baseline-adjusted average 12-hour FEV1 compared with formoterol on the day of randomization (p ≤ 0.029), only budesonide/formoterol 320/9 μg demonstrated this effect at the end of treatment (p ≤ 0.004) [table II].\nFig. 4Mean percentage change from baseline in forced expiratory volume in 1 second (FEV1) over 12 hours at randomization and end of treatment (EOT) for (a) budesonide (BUD)/formoterol (FM) 320/9 μg twice daily (bid) vs placebo, (b) BUD/FM 160/9 μg bid vs placebo and (c) FM 9 μg bid vs placebo. DOR = day of randomization; DPI = dry powder inhaler; pMDI = pressurized metered-dose inhaler. No significant differences in pre-dose IC were observed among the treatment groups (figure 5a). Improvements from baseline to the average over the randomized treatment period in 1-hour post-dose IC were significantly greater with both budesonide/formoterol dosages compared with placebo (p < 0.001) and formoterol (p ≤ 0.018) [figure 5b]. Both dosages of budesonide/formoterol resulted in a 350 mL (≈25%) improvement from baseline to the average over the randomized treatment period in 1-hour post-dose IC, and formoterol resulted in a 210 mL (17%) improvement in 1-hour post-dose IC.\nFig. 5Least squares mean change from baseline by study visit over the randomized treatment period in (a) pre-dose inspiratory capacity (IC) and (b) 1-hour post-dose IC. BUD = budesonide; DPI = dry powder inhaler; FM = formoterol; pMDI = pressurized metered-dose inhaler. * p < 0.001 vs placebo; † p = 0.01 vs FM; ‡ p < 0.05 vs FM; § p < 0.01 vs placebo.\n\nBODY.RESULTS.EFFICACY EVALUATIONS.CHRONIC OBSTRUCTIVE PULMONARY DISEASE (COPD) EXACERBATIONS:\nTime to first COPD exacerbation was significantly prolonged with both budesonide/formoterol dosages compared with placebo (p ≤ 0.004) and with budesonide/formoterol 320/9 μg compared with formoterol (p = 0.026) [figure 6]. In addition, significant reductions in the overall number of exacerbations per patient-treatment year were observed with budesonide/formoterol 320/9 μg and 160/9 μg versus placebo (37% and 41%, respectively; p < 0.001) and formoterol (25% and 29%, respectively; p ≤ 0.004) [see figure S2 in the supplementary material]. These reductions were driven by exacerbations treated with oral corticosteroids, the rate of which was reduced with budesonide/formoterol 320/9 μg and 160/9 μg versus placebo (37.5% and 42.9%, respectively) and formoterol (24.1% and 30.6%, respectively) [p ≤ 0.006]. In addition, the percentage of patients in the budesonide/formoterol 320/9 μg, budesonide/formoterol 160/9 μg, formoterol and placebo groups who experienced an exacerbation was greater in those receiving ICS therapy either alone or in combination before the study (33.7%, 38.2%, 39.9% and 40.1%, respectively) compared with those not previously receiving ICS (25.8%, 24.9%, 29.4% and 32.8%, respectively).\nFig. 6Kaplan-Meier probability curve for the time to first chronic obstructive pulmonary disease exacerbation during randomized treatment. BUD = budesonide; DPI = dry powder inhaler; FM = formoterol; pMDI = pressurized metered-dose inhaler. 
* p ≤ 0.004 vs placebo; † p = 0.026 vs FM.\n\nBODY.RESULTS.EFFICACY EVALUATIONS.HEALTH STATUS AND COPD SYMPTOMS:\nImprovements in SGRQ total score were significantly greater for both budesonide/formoterol dosages compared with placebo (p ≤ 0.006) and for budesonide/formoterol 160/9 μg compared with formoterol (p = 0.006; table III). The improvements from baseline in SGRQ total score in the active treatment groups were not clinically meaningful versus placebo (i.e. decrease of ≥4 points). Significantly greater improvements in COPD symptom variables were observed for both budesonide/formoterol dosages versus placebo for all variables (BCSS, dyspnoea score, cough score, sputum score, sleep score, percentage of awakening-free nights and total daily rescue medication use) [p ≤ 0.003]. Compared with formoterol, mean improvements were significantly greater with budesonide/formoterol 320/9 μg (p ≤ 0.038) for all COPD symptom variables except sputum score and percentage of awakening-free nights, and with budesonide/formoterol 160/9 μg (p ≤ 0.047) for all COPD symptom variables except BCSS, dyspnoea and sputum score (table IV).\nTable IIIMean changes (SD) in SGRQ total and domain scores from baselinea to end of treatmentbTable IVMean (SD) changes in chronic obstructive pulmonary disease (COPD) symptom variables from baselinea to the average over the randomized treatment period Improvements (i.e. reductions) in dyspnoea scores were significantly greater for all active treatments versus placebo (p ≤ 0.003) and for budesonide/formoterol 320/9 μg versus formoterol (p = 0.032; table IV). All active treatment arms demonstrated ≥0.2 points (minimal important difference) change from baseline, but only budesonide/formoterol 320/9 μg had an increase of ≥0.2 points over placebo. A significantly (p < 0.001) greater percentage of patients in both budesonide/formoterol groups and in the formoterol group experienced clinically meaningful improvements in dyspnoea versus placebo (see figure S3 in the supplementary material).\n\nBODY.RESULTS.SAFETY EVALUATIONS:\nMean treatment exposure was lowest for placebo (270 days) compared with budesonide/formoterol 320/9 μg (305 days), budesonide/formoterol 160/9 μg (299 days) and formoterol (289 days). The most commonly reported AE (irrespective of causality) was COPD, which had a lower incidence with budesonide/formoterol 320/9 μg versus placebo and a slightly higher incidence with budesonide/formoterol 160/9 μg versus placebo (table V). AEs considered by the investigator to be related to study medication were generally similar among treatment groups with the most commonly reported being oral candidiasis, COPD and dysphonia (see table SI in the supplementary material).\nTable VOverall adverse events (AEs) [irrespective of relationship to study medication] reported by ≥3% of patients The total incidence of pneumonia-related AEs (pneumonia, bronchopneumonia, lobar pneumonia and pneumonia staphylococcal) was similar for budesonide/formoterol 320/9 μg and 160/9 μg (4.0% and 3.4%, respectively) compared with formoterol (3.4%) and placebo (5.0%) [see table SII in the supplementary material]. For potential lung infections other than pneumonia, the incidence was slightly higher in the active treatment groups versus placebo, which was driven largely by bronchitis (see table SII in the supplementary material). 
AEs typically or potentially associated with local and systemic effects of inhaled corticosteroids (local effects: aphonia, dysphonia, oral candidiasis and candidiasis; systemic effects: weight gain, adrenal suppression, ocular effects, skin effects, psychiatric disorder, diabetes control, thirst, taste effects and bone effects) were more frequent with budesonide/formoterol 320/9 μg (10.3% and 4.0%, respectively) and budesonide/formoterol 160/9 μg (5.7% and 4.5%, respectively) than formoterol (0.6% and 2.6%, respectively) and placebo (2.5% and 2.7%, respectively). The overall incidence of LABA class effects (i.e. tremor, palpitation, tachycardia, potassium changes, glucose changes, headache, agitation, anxiety, sleep effects and muscle cramp) was low, but higher among active treatment groups (budesonide/formoterol 320/9 μg [9.5%], budesonide/formoterol 160/9 μg [8.9%] and formoterol [6.5%]) versus placebo (4.8%). The incidence of cardiac-related AEs was higher for all active treatments (10.5–11.3%) versus placebo (6.9%). The most common cardiac-related AEs were hypertension and angina pectoris (budesonide/formoterol 320/9 μg [2.4% and 1.2%, respectively], budesonide/formoterol 160/9 μg [1.6% and 0.8%, respectively], formoterol [2.6% and 1.8%, respectively] and placebo [2.1% and 1.0%, respectively]). Although the number of atrial fibrillation AEs was low, all cases occurred within the active treatment groups. Discontinuations due to AEs (DAEs) occurred in 238 patients, with a similar incidence observed across treatment groups (range 11.3–12.5%) [see table SIII in the supplementary material]. The most common DAE was COPD, which was highest in the formoterol group (7.3%) and lowest in the budesonide/formoterol 320/9 μg group (4.0%); the incidence was similar in the budesonide/formoterol 160/9 μg (6.1%) and placebo (6.0%) groups (see table SIII in the supplementary material). The overall incidence of non-fatal serious adverse events (SAEs) was highest with formoterol (17.8%) compared with the other treatment groups and higher with budesonide/formoterol 320/9 μg (15.6%) and budesonide/formoterol 160/9 μg (13.6%) versus placebo (12.1%) [see table SIV in the supplementary material]. The most common non-fatal SAEs (occurring in five or more patients [all treatment groups combined]) were COPD (6.8%), pneumonia (1.5%), atrial fibrillation (0.5%), angina pectoris (0.3%), bronchitis (0.3%) and coronary artery disease (0.3%). These non-fatal SAEs occurred at a similar incidence in all treatment groups, except for COPD, which was slightly more common in the active treatment groups (budesonide/formoterol 320/9 μg [7.1%], budesonide/formoterol 160/9 μg [6.7%] and formoterol [7.9%]) versus placebo (5.6%). Fifteen patients died during the randomized treatment period with no imbalance across treatments: three in the budesonide/formoterol 320/9 μg group; six in the budesonide/formoterol 160/9 μg group; two in the formoterol group; and four in the placebo group. Fifteen patients died after the randomized treatment period (≥2 days after stopping study treatment) with no imbalance across treatments: five in the budesonide/formoterol 320/9 μg group; two in the budesonide/formoterol 160/9 μg group; four in the formoterol group; and four in the placebo group. None of the deaths were considered drug related by the investigator. Clinically significant changes in vital signs, 24-hour urinary cortisol, and 12-lead ECGs, QT interval and Holter assessments were rare. 
The geometric mean values of 24-hour urinary cortisol at 6 months and end of treatment were lower in both budesonide/formoterol dosage groups compared with formoterol and placebo (see table SV in the supplementary material). Significant differences in 24-hour urinary cortisol were observed between the budesonide/formoterol 320/9 μg and placebo groups (p = 0.035) at month 6 and between the budesonide/formoterol 320/9 μg and formoterol groups (p = 0.044) at end of treatment. Corrected QT (QTc) intervals ≥450 msec were experienced by similar percentages of patients across all treatment groups, whereas few patients experienced QTc intervals ≥500 msec or a QTc change ≥60 msec (see table SVI in the supplementary material). Data from ECG and Holter recordings showed a similar incidence of new onset atrial fibrillation in the active and placebo groups. BMD was stable over the 1-year study period in all treatment groups, with small but statistically significant differences in changes from baseline observed for budesonide/formoterol 320/9 μg compared with all other treatments for total lumbar spine BMD (p ≤ 0.037) and for budesonide/formoterol 320/9 μg versus formoterol for total hip BMD (p = 0.012) [see table SVII in the supplementary material]. The differences between the treatment groups in changes from baseline in BMD were of unclear clinical relevance, as the geometric least squares mean ratios for these treatment group comparisons were close to 1 (range 0.98–0.99). Minor increases in lenticular opacities and intraocular pressure were noted across all treatment groups in the ophthalmological analysis subset; the magnitude of these changes was similar among the treatment groups (see table SVIII in the supplementary material). Small but significant differences were observed for the change from baseline in posterior subcapsular score between the budesonide/formoterol groups (p = 0.022). Clinically significant changes in ophthalmological assessment were infrequent.\n\nBODY.DISCUSSION:\nIn this study, both budesonide/formoterol dosage strengths resulted in significant improvements from baseline over the randomized treatment period in pulmonary function, health status and COPD symptoms, as measured by dyspnoea, cough and sputum production, and a clinically significant reduction in COPD exacerbation rate compared with placebo. In addition, both budesonide/formoterol pMDI dosages were well tolerated relative to formoterol alone and placebo. The results of this 12-month study confirm those from the earlier related 6-month study by Tashkin et al.[4] and are consistent with those from studies of budesonide/formoterol DPI.[2,3] Taken together, the results demonstrate that this ICS/LABA combination provides benefits beyond formoterol alone in the treatment of COPD. The contribution of budesonide to improvements from baseline in pulmonary function and COPD symptoms was demonstrated by the greater efficacy of budesonide/formoterol 320/9 μg compared with formoterol alone for the co-primary variables of pre-dose FEV1 and 1-hour post-dose FEV1, and for dyspnoea and total BCSS scores, supporting use of the higher-dose combination for COPD symptoms. Both budesonide/formoterol dosage strengths demonstrated greater improvements from baseline compared with formoterol alone in morning and evening PEF, 12-hour FEV1, symptom scores and rescue medication use. 
In addition, the 25–29% reduction in exacerbation rate observed with both budesonide/formoterol dosage strengths beyond the substantial reduction achieved with formoterol alone further demonstrates the important contribution of budesonide to the combination product. The magnitude of the reduction in exacerbation rates reported in this study is similar to that reported previously for fixed-dose regimens of budesonide/formoterol 320/9 μg twice daily[2–4] and fluticasone propionate/salmeterol[11–13] relative to LABA alone. A key difference between this 12-month study and the 6-month study of similar design reported by Tashkin et al.[4] is that this study was powered to show a difference in exacerbations, while the 6-month study was not. This study demonstrated a significant reduction in exacerbation rates in patients treated with budesonide/formoterol 320/9 μg and 160/9 μg compared with those receiving formoterol or placebo. The fact that a greater percentage of patients in the formoterol and placebo groups who previously received ICS therapy, either alone or in combination, experienced exacerbations compared with those who did not previously receive ICS therapy may result in a skewing of the data; however, a similar response was observed in combination budesonide/formoterol patients with a greater percentage of patients who previously received ICS experiencing exacerbations compared with patients who were not previously treated with ICS therapy. Similarly, a greater percentage of patients who previously received ICS therapy discontinued in the formoterol and placebo groups than those who did not previously receive ICS therapy; however, this trend was reversed in the combination budesonide/formoterol groups. The long-term nature of this study allowed for a more thorough assessment of tolerability compared with the 6-month study.[4] No differences in pneumonia incidence were observed among treatment groups in this study or in the 6-month study, despite a shorter exposure time in the placebo group.[4] These results contrast with previous COPD studies that showed an increased incidence of pneumonia with fluticasone propionate/salmeterol treatment.[11–14] However, in this study and the 6-month study,[4] the incidence of potential lung infections other than pneumonia was slightly higher in all active treatments compared with placebo; these differences were largely driven by an increased incidence of bronchitis. The incidence of SAEs, including COPD SAEs, was slightly higher in both budesonide/formoterol groups compared with placebo. This finding may be attributed to the earlier discontinuation in the placebo group, which resulted in a shorter study drug exposure (35 and 29 fewer days vs budesonide/formoterol 320/9 μg and 160/9 μg, respectively). Although the overall incidence of atrial fibrillation, a known LABA class effect, was small (1.1%), an imbalance was noted between the formoterol-containing arms and placebo. However, no differences in new-onset atrial fibrillation were apparent among groups based on 24-hour Holter monitoring or ECG data collected at repeated visits during the randomized treatment period. 
Overall, the safety profile of budesonide/formoterol pMDI in this 12-month study was similar to that reported in the 6-month study by Tashkin et al.,[4] suggesting no increase in safety concerns with a longer duration of treatment.\n\nBODY.CONCLUSIONS:\nBoth dosage strengths of budesonide/formoterol pMDI (320/9 μg and 160/9 μg) were effective and well tolerated over 1 year of treatment in this population of patients with moderate to very severe COPD. In addition, budesonide/formoterol pMDI 320/9 μg demonstrated greater efficacy compared with formoterol for pre-dose FEV1, suggesting that the higher dosage strength containing twice the daily dose of budesonide is appropriate in this patient population.\n\nBODY.ELECTRONIC SUPPLEMENTARY MATERIAL:\nSupplementary material, approximately 247 KB.\n\n**Question:** Compared to Formoterol DPI 9 μg what was the result of Budesonide/formoterol pMDI 320/9 μg on Dyspnoea score improvement?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly increased\n"
}
] |
significantly increased
|
257
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Impact and Process Evaluation of Integrated Community and Clinic-Based HIV-1 Control: A Cluster-Randomised Trial in Eastern Zimbabwe\n\n ABSTRACT.BACKGROUND:\nHIV-1 control in sub-Saharan Africa requires cost-effective and sustainable programmes that promote behaviour change and reduce cofactor sexually transmitted infections (STIs) at the population and individual levels.\n\nABSTRACT.METHODS AND FINDINGS:\nWe measured the feasibility of community-based peer education, free condom distribution, income-generating projects, and clinic-based STI treatment and counselling services and evaluated their impact on the incidence of HIV-1 measured over a 3-y period in a cluster-randomised controlled trial in eastern Zimbabwe. Analysis of primary outcomes was on an intention-to-treat basis. The income-generating projects proved impossible to implement in the prevailing economic climate. Despite greater programme activity and knowledge in the intervention communities, the incidence rate ratio of HIV-1 was 1.27 (95% confidence interval [CI] 0.92–1.75) compared to the control communities. No evidence was found for reduced incidence of self-reported STI symptoms or high-risk sexual behaviour in the intervention communities. Males who attended programme meetings had lower HIV-1 incidence (incidence rate ratio 0.48, 95% CI 0.24–0.98), and fewer men who attended programme meetings reported unprotected sex with casual partners (odds ratio 0.45, 95% CI 0.28–0.75). More male STI patients in the intervention communities reported cessation of symptoms (odds ratio 2.49, 95% CI 1.21–5.12).\n\nABSTRACT.CONCLUSIONS:\nIntegrated peer education, condom distribution, and syndromic STI management did not reduce population-level HIV-1 incidence in a declining epidemic, despite reducing HIV-1 incidence in the immediate male target group. Our results highlight the need to assess the community-level impact of interventions that are effective amongst targeted population sub-groups.\n\nBODY.INTRODUCTION:\nHIV-1–prevalence declines may now be occurring in some sub-Saharan African countries [1]. However, there remains little direct evidence that prevention measures—rather than natural HIV-1 epidemic dynamics [2] or behaviour change prompted by mortality [3]—have contributed to the slowing of HIV-1 epidemics [4,5]. Syndromic management of sexually transmitted infections (STIs) proved effective early in an HIV-1 epidemic in north-west Tanzania [6]. Peer education to promote safe behaviours showed promise in early process evaluations [7], but a randomised controlled trial (RCT) of factory workers in Harare, Zimbabwe, done in the mid-1990s, proved inconclusive [8]. Subsequent RCTs of syndromic management [9] and mass treatment of STIs [10], together with an information, education, and communication (IEC) behaviour-change programme [9], showed no effect in more mature epidemics. 
Integrated implementation of synergistic community-based HIV-1 control strategies could be a more cost-effective and sustainable approach to HIV-1 prevention than parallel application of vertical (top-down) programmes [11]. One scientific evaluation of such a strategy has been reported in which a combination of IEC activities amongst the general population and syndromic STI management showed no impact on HIV-1 incidence at the population level [9], although participation in the IEC activities was associated with reduced HIV-1 infection in women [12]. We conducted a cluster-RCT to test the hypothesis that integrated implementation of combined community- and clinic-based HIV-1 prevention, in which IEC activities focus primarily on high-risk populations, can be feasible and effective in reducing HIV-1 incidence in a major maturing epidemic in eastern Zimbabwe (Protocols S1 and S2; Text S1 and S2).\n\nBODY.METHODS.PARTICIPANTS AND RANDOMISATION PROCEDURE:\nThe study communities comprised six pairs of communities matched by socio-economic type—small town, tea/coffee estate, forestry plantation, roadside trading settlement, and subsistence farming area (two pairs) (Figure 1). Each community included at least one Government or Mission health centre. It was anticipated that HIV-1 incidence would be similar within each pair of communities. Within each pair, one community was assigned at random (un-blinded coin toss by a Ministry of Health official witnessed by programme and research personnel) to receive the additional intervention and the other to be the control. These procedures were designed to ensure that Mission, non-governmental organisation, and private sector programmes (for details, please refer to the following section) would be distributed evenly between intervention and control sites. Figure 1Location of Intervention and Control Communities in Manicaland Province, Eastern Zimbabwe We assessed the effect of the intervention using results from laboratory tests for HIV-1 infection and questionnaire data collected in the baseline and 3-y follow-up rounds of a population-based, closed-cohort survey. The 12 study communities were enumerated in a phased manner, with paired communities being enumerated consecutively to minimise the effects of any seasonal factors. HIV-1–prevention activities were commenced in each intervention community shortly after completion of the baseline survey in that community. In each community, individuals eligible for the study were identified in the first round using data from household listings prepared in an initial census. All males and females aged 17–54 y and 15–44 y at last birthday (the age groups expected to have the highest incidence of HIV infection), respectively, who had slept in a household in the community for at least four nights in the previous month, and who had also done so at the same time 1 y earlier, were considered eligible for the study. In heterosexually driven HIV-1 epidemics, risk of infection can be correlated amongst marital partners [13]. 
Therefore, to maximise statistical power to detect differences in HIV-1 incidence, enrolment was restricted to one randomly selected member per marital group.\n\nBODY.METHODS.INTERVENTIONS:\nIntervention and control communities were to receive standard Government services including basic syndromic STI management, condom distribution from health clinics and Zimbabwe National Family Planning Council outlets, home-based care, and limited HIV/AIDS–focussed IEC activities (e.g., occasional AIDS-awareness meetings and distribution of posters and leaflets). In addition, social marketing of male and female condoms would be provided through an ongoing national programme [14]. The intervention comprised targeted and population-level strategies to promote safer sexual behaviour and to improve treatment of STIs that facilitate HIV-1 transmission. The intervention strategies were implemented by two local non-governmental organisations (Family AIDS Caring Trust and the Biomedical Research and Training Institute) and the Zimbabwe Ministry of Health and Child Welfare through an integrated programme of community- and clinic-based activities. Integration of the individual programme components was achieved through the joint involvement of the participating agencies in the planning and implementation of activities and through the inclusion of biomedical and behavioural aspects within each component. The programme design comprised three key components: (1) peer education and condom distribution amongst commercial sex workers and male clients at workplaces and in the general community, supported by income-generating projects; (2) strengthened syndromic management of STI services at local health centres; and (3) open days with HIV/AIDS IEC activities at health centres to promote safer sexual behaviour and to increase the uptake of local STI treatment services. The peer-education component was based on a model which had been developed by the Project Support Group at the University of Zimbabwe [7] and which had been widely implemented within Zimbabwe and neighbouring countries. Activities were held weekly at workplaces and at locations within the general community (e.g., beer halls and markets) where casual relationships were most frequently formed [15]. The target population comprised sex workers and male clients who form a bridge population in HIV transmission [16] between sex workers and the monogamous (or serial monogamous) majority of women [17,18]. It was posited that the high HIV-1 incidence observed amongst young women could be reduced by altering the behaviour of their older male partners whose own behaviour was intrinsically more risky [19]. The behavioural component would be reinforced in counselling sessions with STI patients and through micro-credit income-generating projects to reduce unmarried women's dependence on commercial sex work. The micro-credit scheme consisted of small interest-free loans repayable over 10 mo, provided to groups and to individuals together with training in small-business management. The targeted activities would be extended to the general population through open days held at local health centres. Besides providing basic HIV/AIDS information, it was envisaged that programme meetings and activities, by their continuous nature, would sustain high levels of awareness of the risks of HIV transmission and would facilitate renegotiation of community social norms, making safer behaviours easier to adopt. 
The key messages of the programme were: (1) remain faithful to one regular sexual partner; (2) use condoms consistently with any casual sexual partners; and (3) seek prompt and effective treatment for any STIs. Syndromic management of STIs at primary healthcare centres was first introduced in Zimbabwe in the 1980s [20] and formed the basis of STI diagnosis and treatment services at baseline in the intervention and control communities. It was envisaged that these services could be strengthened and made more effective through a programme of regular classroom training and on-site supervision of nursing staff, through the introduction of training in systemic counselling for STI patients, and through the provision of small quantities of treatment drugs to cover delays in routine supplies. Quality-assurance procedures applied in the intervention communities included pre- and post-training tests for peer educators and, for nursing staff, attending the syndromic STI management and systemic counselling courses, regular on-site supervision (including random spot checks) and training, refresher courses, routine planning meetings and monitoring of service statistics, and quarterly workshops where detailed programme procedures were reviewed and updated. An interim qualitative process evaluation of intervention activities was conducted during the inter-survey period, and a report on the findings was provided to the implementing organisations.\n\nBODY.METHODS.OUTCOME AND PROCESS MEASURES:\nThe primary outcome of the study was HIV-1 incidence at the community level amongst individuals who were uninfected at baseline. Blood was collected onto Whatman No. 3 filter paper and transported to the Biomedical Research and Training Institute laboratory in Harare. Blood spots were air dried at 4 °C and, for long-term (>1 mo) storage, were kept at −20 °C. For baseline studies, blood was eluted into phosphate-buffered saline, and antibodies to HIV were detected using a dipstick dot EIA (ICL-HIV-1/HIV-2 Dipstick, [PATH, http://www.path.org; produced locally in Thailand]) and a standard protocol [21,22]. All positive results and a 10% sample of negative results were confirmed using a plate EIA (Abbott Third-Generation HIV-1/HIV-2 EIA [http://www.abbott.com] or Genelavia MIXT HIV-1/HIV-2 [Sanofi Diagnostics Pasteur, Marnes La Coquette, France]). At follow-up, a similar protocol was followed. Only the samples from those participants recorded as being HIV seronegative at baseline were tested at follow-up, again using a dot EIA (ICL-HIV-1/HIV-2 Dipstick, [PATH, produced locally in India]). Where seroconversion was indicated, the frozen stored baseline sample was retested to confirm the original negative result using the same dot EIA test. Where the baseline result remained negative, the Abbott EIA test was used to confirm both baseline and follow-up results. The change in place of manufacture of the dot EIA and the exclusive use of Abbott test kits to confirm positive sera at follow-up was due only to changes in the supply of test reagents, and not to perceived changes in sensitivity or specificity [23]. Apart from the principal investigators (based in Harare, London and Oxford) and those nurses given permission by participants requesting voluntary counselling and testing (VCT), all research personnel remained blind to the HIV-1 status of individual participants. 
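The laboratory testing protocol above involves several conditional confirmation steps (confirm all dot-EIA positives plus a 10% sample of negatives at baseline; at follow-up retest only baseline negatives, re-check the stored baseline sample for apparent seroconverters, and confirm both samples by Abbott EIA when the baseline stays negative). A minimal sketch of that decision flow is given below for orientation only; the function name, its boolean inputs, and the returned labels are illustrative assumptions, not the study's laboratory procedures or code.

```python
def classify_follow_up_sample(baseline_dot_eia_negative: bool,
                              follow_up_dot_eia_positive: bool,
                              stored_baseline_retest_negative: bool,
                              abbott_eia_confirms_both: bool) -> str:
    """Illustrative decision flow for the follow-up HIV-1 testing protocol."""
    if not baseline_dot_eia_negative:
        # Only participants recorded as seronegative at baseline were retested.
        return "not retested at follow-up"
    if not follow_up_dot_eia_positive:
        return "remains HIV-1 seronegative"
    if not stored_baseline_retest_negative:
        # The frozen baseline sample did not confirm the original negative result.
        return "original baseline result not confirmed"
    if abbott_eia_confirms_both:
        return "seroconversion confirmed (baseline negative, follow-up positive)"
    return "discordant confirmatory results (resolution not described in this sketch)"
```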
Secondary outcomes, measured at the community and individual level, were self-reported genital ulcers and urethral or vaginal discharge in the past year (STI cases), STI treatment effectiveness (self-reported cessation of symptoms), indicators of sexual and health-seeking behaviour change, and HIV/AIDS knowledge. The behaviour-change variables assessed were sexual debut, sexual partner change in the past year, non-regular partnerships in the past month, and unprotected sex with regular and casual partners in the past 3 y. The data on sexual partnerships and condom use were collected using the Informal Confidential Voting Interview method for 75% of respondents selected at random in the first round of the survey. This method includes procedures to build rapport, ensure a non-judgemental interview approach, and provide reassurance that there are no right or wrong answers to questions of a personal nature, and uses a simple secret voting-box system to reduce embarrassment and guarantee confidentiality in low-development settings [18]. Its use has been shown to be associated with greater disclosure of socially proscribed behaviour in the study population [24]. Process indicators examined comprised changes in knowledge and psychosocial status and indicators of programme coverage and quality.\n\nBODY.METHODS.SAMPLE-SIZE CALCULATIONS:\nInitial sample-size calculations assumed 20% HIV-1 prevalence at baseline, 30% loss to follow-up after 2 y, and 80% power to detect a 40% reduction in HIV-1 incidence in the intervention communities compared with control communities, assuming a background yearly incidence of 2%. Based on six pairs of communities and a co-efficient of variation between communities of 0.15, the required sample size in each community was 1,000. Funding constraints and slower than anticipated implementation of intervention activities led to revisions of the sample size for each community to 800 and the length of follow-up to 3 y, respectively. Assuming a proportionate increase in loss to follow-up to 41%, these arrangements also yielded 80% power to detect a 40% reduction in HIV-1 incidence.\n\nBODY.METHODS.STATISTICAL METHODS:\nTo test the randomisation with small numbers of communities, HIV-1 prevalence, STI history, and socio-demographic characteristics were compared at baseline for study participants in the intervention and control communities, together with uptake of STI treatment and VCT services offered at baseline. Outcome and process indicators were compared for intervention versus control communities. Analysis of the primary outcome was on an intention-to-treat basis. Incident events and person-years at risk of seroconversion were used to calculate HIV-1 incidence rates and unadjusted and adjusted incidence rate ratios (IRR) with 95% confidence intervals (CIs) for each pair of communities. Adjustment was made for sex, 3-y age group, and community-level baseline HIV prevalence. The overall IRRs (unadjusted and adjusted) were taken to be the geometric means of the IRRs for the six pairs of communities. We calculated 95% CIs for each geometric mean as geometric mean ± 1.96 × standard error of the geometric mean. Paired student t-tests on the logarithms of the pair-specific IRRs were used to test whether these differed significantly from unity [25]. The coefficient of variation between communities was calculated based on baseline HIV prevalence using a standard procedure for pair-matched studies [26]. 
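The primary-outcome analysis described above combines the pair-specific incidence rate ratios as a geometric mean, attaches a 95% CI of the form geometric mean ± 1.96 × SE, and applies a paired Student's t-test to the log IRRs. A small sketch of those calculations is shown below; the six IRR values are invented for illustration, and the delta-method standard error is an assumption about how the SE of the geometric mean was obtained, since the article does not spell this out.

```python
import math
from statistics import mean, stdev
from scipy import stats

# Hypothetical pair-specific IRRs (intervention vs. control), one per matched pair.
pair_irrs = [1.31, 0.99, 1.42, 1.18, 1.05, 1.37]
log_irrs = [math.log(r) for r in pair_irrs]
n = len(log_irrs)

# Overall IRR as the geometric mean of the pair-specific IRRs.
gm_irr = math.exp(mean(log_irrs))

# 95% CI as geometric mean +/- 1.96 x SE(geometric mean); the SE here is a
# delta-method approximation, SE(GM) ~= GM x SE(mean log IRR).
se_log_mean = stdev(log_irrs) / math.sqrt(n)
se_gm = gm_irr * se_log_mean
ci = (gm_irr - 1.96 * se_gm, gm_irr + 1.96 * se_gm)

# Paired Student's t-test of whether the log IRRs differ from 0 (i.e. IRR = 1).
t_stat, p_value = stats.ttest_1samp(log_irrs, popmean=0.0)

print(f"overall IRR {gm_irr:.2f} (95% CI {ci[0]:.2f} to {ci[1]:.2f}), p = {p_value:.2f}")
```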
Analyses of prevalence for secondary outcome and process variables were conducted separately for male and female respondents seen at both survey rounds by fitting logistic regression models to the individual-level data and adjusting for community pair and, where available, value of variable at baseline. Since most programme activities were targeted and overall coverage of programme activities was therefore limited, sub-group analyses, adjusted for community pair, were done for HIV-1 incidence and behavioural outcomes to assess the individual-level effects of attendance at programme meetings. Data were entered and validated using SPSS-PC (http://calcnet.mth.cmich.edu/org/spss/index.htm) and data analysis was conducted in Stata version 7 (http://www.stata.com). Statistical tests were double-sided and results were taken to be significant at the 5% level.\n\nBODY.METHODS.ETHICAL APPROVAL:\nAll study participants in the intervention and control communities were offered free VCT for HIV-1, an information sheet on HIV/AIDS, results from a diagnostic test for Trichomonas vaginalis [27] (done at baseline only), and free treatment for T. vaginalis and other STIs from a research nurse. Testing and treatment for T. vaginalis was provided because the prevalence of other curable STIs was low in the study areas [22]. Antibodies reactive with T. vaginalis were detected in DBS eluates following a previously described procedure [27,28]. Written informed consent was sought as a condition of enrolment and continuation in the study. Prior ethical approval was obtained from the Research Council of Zimbabwe, number 02187; the Applied and Qualitative Research Ethics Committee in Oxford, United Kingdom, N97.039; and the UNAIDS Research Ethics Committee, ERC 98/03.\n\nBODY.RESULTS.PARTICIPANT FLOW:\nIn round 1 of the census (July 1998 to February 2000), 5,943 and 6,037 eligible individuals in the intervention (total population size 18,104) and control (18,633) communities, respectively, were selected for recruitment into the study cohort (Figure 2). In round 2, 3 y later (July 2001 to February 2003), 1,044 (23%) and 1,144 (26%) of baseline respondents who were still alive had migrated away from the intervention and control communities, respectively, and were therefore lost to follow-up (Figure 2). At both baseline and follow-up, migrants and non-migrants had similar risks of HIV-1 infection and associated behaviour [29]. Of those still resident in the intervention and control communities, 2,664 (75%) and 2,564 (77%), respectively, were interviewed and blood samples taken for a second time. Temporary absence from the usual place of residence was the main reason for non-participation in the intervention (n = 794, 95%) and control (n = 698, 94%) communities. The overall proportions of baseline respondents followed up at the end of the study were 55% and 56% in the intervention and control communities, respectively. The median follow-up of communities was 3.0 y (range of median within communities, 3.0–3.1). Figure 2Flow-Chart Comparing Participation and Follow-Up Rates in the Intervention and Control CommunitiesIndividuals enrolled in round 1 and still resident in the study communities were considered eligible for participation in round 2.\n\nBODY.RESULTS.BASELINE DATA:\nHIV-1 prevalence was higher in the intervention communities than in the control communities (24% versus 21%, risk ratio 1.13 [95% CI 1.05–1.22], p = 0.001). T. 
vaginalis infection, secondary school education, and spatial mobility were more common in the control communities, whilst history of genital discharge and uptake of STI treatment and VCT services offered in the survey were low overall but more frequent in the intervention communities (Table 1). However, the differences in each case were small and were unlikely to be clinically meaningful. Table 1 Baseline Characteristics of the Study Populations \n\nBODY.RESULTS.OUTCOMES AND ESTIMATION:\nMedian follow-up per person was 2.9 y (range 1.4–3.9) and 3.0 y (range 1.5–4.1) in the intervention and control communities, respectively. In total, 4,052 individuals had 212 incident events of HIV-1 during 12,009 person-years at risk, giving an HIV-1 incidence rate of 1.77 per 100 person-years at risk. HIV-1 incidence was higher in communities with higher baseline HIV prevalence (IRR 11.49 [95% CI 1.80–73.40], p = 0.010), but this difference disappeared after adjustment for stratification by community type (p = 0.8). HIV-1 incidence was higher in the intervention communities than in the control communities overall, and in each community type, except in the forestry plantations where it was almost identical (Table 2). The difference was not significant after adjustment for sex, age group, and baseline HIV prevalence (IRR 1.27 [95% CI 0.92–1.75], p = 0.012). The observed coefficient of between-community variation was 0.14. Table 2 HIV Prevalence at Baseline and HIV Incidence and IRRs for Intervention Versus Control Communities Looking at outcome indicators for community members (rather than for communities—the unit of randomisation), self-reported STI symptoms were similar in both sets of communities (Table 3). Treatment for STI symptoms in males was effective more frequently in the intervention communities, with men in the intervention community in five of the six matched pairs reporting reduced symptom recurrence. However, more young women in the intervention than in the control communities had started sex, and reports of unprotected sex with a casual partner in the study period were more common in the intervention communities. No differences were observed in consistent condom use with regular partners between the two sets of communities. In the intervention communities, knowledge about HIV/AIDS was enhanced amongst men, and more respondents reported a close relative or family member with AIDS (sex- and age-adjusted prevalence odds ratio 1.22 [95% CI 1.05–1.42], p = 0.009). Slightly more women in the intervention communities reported that condom use within marriage was becoming acceptable, but a greater proportion of men agreed with the statement that \"condoms reduce the pleasure of sex\". Table 3 Biomedical, Sexual Behaviour, and Psychological Outcomes at Follow-up by Residence in the Intervention and Control Communities A total of 63,261 peer-education meetings were held, and 6.8 million condoms were distributed by the programme in the intervention communities (Table 4). Outputs increased over time as new communities entered the programme. However, owing to high inflation and economic decline, the micro-credit income-generating projects proved impossible to implement. We were able to obtain data on STI episodes treated at clinics in the 11 out of 12 study communities that reported cases to the administrative districts of Mutasa and Makoni. In the three intervention communities each in Mutasa and Makoni, STI cases fell by 66% and 51%, respectively, over the 3-y study period. 
Similar declines of 67% and 52% occurred at clinics in the four control communities in Mutasa and the one control community in Makoni. Coverage of training in syndromic STI management and systemic counselling for nursing staff was high (Table 4). Table 4 Summary of Service Statistics on Programme Output Most of the activities were targeted at high-risk groups. In the general population sample interviewed in the follow-up survey, 1,779 (35%) and 647 (13%) of 5,098 respondents reported attending an HIV/AIDS meeting and a programme meeting, respectively (Table 5). More respondents in the intervention communities than in the control communities attended an HIV/AIDS meeting (41% versus 28%, prevalence rate ratio 1.44 [95% CI 1.33–1.56], p < 0.001) and a programme meeting (20% versus 5%, 4.27 [95% CI 3.52–5.17], p < 0.001), and participation was higher among men than women (prevalence rate ratio 1.32 [95% CI 1.14–1.53], p = 0.002). Fewer women in the intervention communities had heard about HIV/AIDS from external sources or believed that STI drugs were available at their local clinics. Sixty-two (2%) out of 2,528 respondents in the control communities reported spending at least 1 d in the past month in the intervention communities; the equivalent number for respondents in the intervention communities visiting control communities was 70 (3%) out of 2,683. Table 5 Intervention Coverage \n\nBODY.RESULTS.ANCILLARY ANALYSES:\nIn exploratory analysis to assess where the intervention failed, we found that HIV-1 incidence was reduced in males (IRR 0.48 [95% CI 0.24–0.98], p = 0.044) who reported attending programme meetings, after adjustment for the targeting of activities to groups with high-risk behaviour (Table 6). Amongst men who reported one or more casual sexual partners in the past 3 y, fewer of those who attended meetings reported unprotected sex with these partners (prevalence odds ratio 0.45 [95% CI 0.27–0.75], p = 0.002). HIV-1 incidence was not associated with programme participation in women. Table 6 HIV Incidence and IRRs by Meeting Attendance and Sex \n\nBODY.DISCUSSION.INTERPRETATION:\nWe conducted a scientific trial of the feasibility and impact of an integrated community- and clinic-based HIV-1–prevention intervention. The income-generating projects apart, the intervention activities were feasible. The outputs of the programme were extensive with more than 63,000 meetings being conducted and almost 7 million condoms distributed by trained peer educators. Programme messages were considered relevant and realistic. Local STI treatment and counselling services were strengthened and promoted in accordance with the intervention protocol. For male participants, these activities improved HIV/AIDS knowledge, increased the effectiveness of STI treatment, increased consistent condom use with casual partners, and reduced HIV-1 incidence. However, the cluster-RCT results clearly show that the intervention had no positive impact at the community level and suggest possible detrimental effects on the onset of female sexual activity and condom use with casual partners over a 3-y timeframe. Did the cluster-RCT design fail to capture the true effect of the intervention? There are three possibilities: (1) inadequate statistical power; (2) insufficient follow-up; and (3) contamination of intervention within control communities. 
The study design provided adequate statistical power to detect a meaningful average reduction (40%) in HIV-1 incidence in the intervention versus the control communities over a 3-y observation period. In hindsight, an effect size of 40% was too optimistic and the study had insufficient power to detect a smaller effect. However, there was no trend in the results towards reduced HIV-1 incidence in the intervention communities. Largely due to migration, attrition was close to that anticipated in the study design and was comparable to other recent cohort studies [6,10,9,30]. Migrants had similar characteristics and sexual behaviour to non-migrants [29]. The results of the exploratory sub-group analysis generate the hypothesis that high-risk behaviour was reduced in males attending programme meetings but did not translate into a wider impact on HIV-1 incidence at the population level. Changes in core and bridge populations may take more time to reflect in the general population than was observed in the trial. However, a longer period of follow-up would have increased attrition, and the finding of a possible adverse effect at the population level meant that it would not have been ethical to continue with the same intervention. Future trials of behaviour-change interventions may need to include multiple rounds with phased recruitment and (where interim results are favourable) may need to consider phased intervention implementation. We minimised intervention contamination by selecting physically separated study communities, and movements between intervention and control communities were rare. However, a similar peer-education programme was implemented in one control community (small town), and HIV-1–prevention activity was considerable in all control communities that also had greater access to information from external sources. In some cases, programme messages (e.g., promotion of condom use) conflicted with those of other agencies working in the intervention communities. The effects of these other programmes could have limited our ability to detect a reduction in HIV-1 incidence caused by the current intervention. The absence of an observed effect of the intervention was not explained by differences in HIV-1 prevalence, sexual behaviour, STI cofactors, mobility, or socio-demographic composition at baseline. The earlier sexual debut in females and greater unprotected sex with casual partners seen in the intervention communities during the study period were not present at baseline but could reflect increased willingness to report high-risk behaviours in settings where there was more open discourse about HIV and AIDS. The peer-education programme could have had some effect for male but not for unmarried female participants. Preliminary findings from subsequent qualitative investigations indicate that, in the predominantly rural communities in which the study was conducted, poverty and the associated failure of income-generating projects meant that some peer educators were unable to maintain safer behaviours. Given their increased visibility within the community—intended to enhance their status and self-esteem and, thus, to reinforce their commitment to and role as models for behaviour change—they may, inadvertently, have served as negative role models and, thereby, may have contributed to the greater female early-age sexual activity. 
Free distribution of condoms by women still engaging in unprotected commercial sex led to their being poorly valued and reinforced their association with promiscuity.\n\nBODY.DISCUSSION.GENERALISABILITY OF FINDINGS:\nEpidemiological context can affect the impact of interventions [31], and structural obstacles can limit the pace and extent to which activities are implemented and the quality of these activities [32]. The HIV-1 epidemic stabilised in eastern Zimbabwe during the study period, with HIV-1 prevalence declining by 40%–50% in young adults [23]. This decline was accompanied by delayed sexual debut, reduced sexual partner change, and consistent condom use with casual partners [33,23]. Prevalence of syphilis, gonorrhoea, and Chlamydia is low, but non-curable herpes simplex virus type 2 remains common [22]. Risk reduction makes transmission more fragile, and an intervention could have a larger effect when set against secular behavioural changes [2]. Mathematical model simulations suggest that there would also be a greater chance of detecting a significant effect of the intervention even though there would be fewer seroconversions to power the calculation [34,35]. Structural obstacles to intervention implementation included HIV/AIDS mortality which disrupted the programme by claiming the lives of two programme coordinators and several of the nursing staff and peer-educators. Economic decline made the income-generating projects unfeasible and reduced the effectiveness of other components of the intervention. We believe that the coverage of the peer-education programme was satisfactory, given the focus on highly sexually active individuals. Meeting coverage could have been under-estimated in the survey since one-to-one discussions and activities at beer halls and other public places may not have been recognised as meetings by those present. However, the high level of spatial mobility limited the number of people who were reached at the required level of intensity and consistency, whilst national shortages of foreign currency restricted fuel and drug supplies, hampered attempts to extend community activities into the more remote rural areas, and disrupted the STI treatment programme in both the intervention and control communities. The intervention that we evaluated could have greater effect where an HIV-1 epidemic is younger, HIV-1 incidence is greater, local sexual networks are less diffuse, background STI control is weak, herpes simplex virus type 2 is less common, population mobility is lower, and/or the socio-economic climate is stable. We cannot rule out an effect of peer education in the urban intervention community since similar activities were implemented in the control community. Targeted peer education may work better in towns where bar-based sex work is more extensive. The absence of reduced HIV-1 incidence in farming estates reinforces doubts raised by the Harare factory workers study [8] concerning the efficacy of workplace peer education.\n\nBODY.DISCUSSION.OVERALL EVIDENCE:\nThese findings are important since the strategies evaluated—i.e., peer education, condom distribution, and syndromic STI management—are theory-based, have the potential for independent effects [11], and are widely promoted [36,37]. Syndromic STI management was effective in a nascent epidemic [6]. However, our disappointing findings echo those from recent trials [9,12] and emphasise the need for alternative strategies of behaviour-change promotion. 
Social marketing of condoms [14], larger poverty-alleviation programmes to reduce women's reliance on sex work—based on skills training and careful market research rather than on small-scale income-generating projects—and strategies which reach beyond high-activity core groups, such as the Popular Opinion Leader programme [38,39], and client-centred counselling [40], could be more viable and effective in reducing HIV-1 transmission in rural areas. Given the necessary economic conditions, unmarried women may still play a useful role in bar-based programmes since beer halls remain foci for high-risk behaviour [41,15].\n\nBODY.SUPPORTING INFORMATION.TRIAL REGISTRATION:\nThis trial has the registration number NCT00390949 in the ClinicalTrials.gov register. Found at: http://www.clinicaltrials.gov/ct/show/NCT00390949?order=1\n Protocol S1: Protocol (35 KB DOC). Protocol S2: Revisions to Protocol (35 KB DOC). Text S1: CONSORT Checklist (48 KB DOC). Text S2: Ethical Approval, Information Letter, and Consent Forms (2.8 MB PDF).\n\n**Question:** Compared to Baseline communities that received standard Government services what was the result of Baseline communities that received additional education and preventive measures on Genital discharge history?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** no significant difference\n"
}
] |
no significant difference
|
220
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: A heart failure self-management program for patients of all literacy levels: A randomized, controlled trial [ISRCTN11535170]\n\n ABSTRACT.BACKGROUND:\nSelf-management programs for patients with heart failure can reduce hospitalizations and mortality. However, no programs have analyzed their usefulness for patients with low literacy. We compared the efficacy of a heart failure self-management program designed for patients with low literacy versus usual care.\n\nABSTRACT.METHODS:\nWe performed a 12-month randomized controlled trial. From November 2001 to April 2003, we enrolled participants aged 30–80, who had heart failure and took furosemide. Intervention patients received education on self-care emphasizing daily weight measurement, diuretic dose self-adjustment, and symptom recognition and response. Picture-based educational materials, a digital scale, and scheduled telephone follow-up were provided to reinforce adherence. Control patients received a generic heart failure brochure and usual care. Primary outcomes were combined hospitalization or death, and heart failure-related quality of life.\n\nABSTRACT.RESULTS:\n123 patients (64 control, 59 intervention) participated; 41% had inadequate literacy. Patients in the intervention group had a lower rate of hospitalization or death (crude incidence rate ratio (IRR) = 0.69; CI 0.4, 1.2; adjusted IRR = 0.53; CI 0.32, 0.89). This difference was larger for patients with low literacy (IRR = 0.39; CI 0.16, 0.91) than for higher literacy (IRR = 0.56; CI 0.3, 1.04), but the interaction was not statistically significant. At 12 months, more patients in the intervention group reported monitoring weights daily (79% vs. 29%, p < 0.0001). After adjusting for baseline demographic and treatment differences, we found no difference in heart failure-related quality of life at 12 months (difference = -2; CI -5, +9).\n\nABSTRACT.CONCLUSION:\nA primary care-based heart failure self-management program designed for patients with low literacy reduces the risk of hospitalizations or death.\n\nBODY.BACKGROUND:\nLimited literacy skills are common among adults in the United States [1]. Low literacy is associated with increased risk of hospitalization and worse control of chronic diseases [1-4]. Heart failure is a common chronic illness requiring multiple medications and significant self-care. Heart failure is the leading cause of hospitalization in the Medicare population [5]. The complexity of care for heart failure puts people with low literacy at considerable risk for adverse outcomes including hospitalization, worse quality of life, and mortality. Heart failure disease-management interventions appear effective in reducing rehospitalizations and improving quality of life [6]. 
Most randomized clinical trials of heart failure disease management completed over the last 10 years have enrolled patients during, or shortly after, hospitalization and reported the outcome of readmission [6]. Although the designs of these programs vary, several have tested education and support to enhance patient self-management as the main component of the intervention [7-10]. The content of self-management education usually includes teaching to understand medications, reduce salt intake, monitor daily weights, and recognize symptoms. Most programs include structured follow-up either by home visit, phone, or mail. Only a few, uncontrolled studies specifically ask patients to self-adjust their diuretics [11,12]. Heart failure self-management programs may be particularly effective for vulnerable populations, such as those with poor literacy [13,14]. However, to our knowledge, no previous studies have explicitly examined the role of self-management programs in a low literacy population. A recently published study and accompanying editorial suggested that such self-management support may be most effective among vulnerable populations [13,14]. Low literacy may represent a vulnerability for which we should design our programs. Disease management for patients with low literacy may require refined approaches to foster self-management skills. We developed a heart failure self-management program for use by patients with a variety of literacy levels [15]. We performed a randomized controlled trial comparing our self-management program to usual care among outpatients to test if the program could reduce hospitalizations and improve heart failure-related quality of life.\n\nBODY.METHODS.STUDY DESIGN:\nWe conducted a randomized controlled trial in the University of North Carolina (UNC) General Internal Medicine Practice, which serves a wide socioeconomic range of patients. The practice, staffed by over 20 attending faculty and 70 medical residents, cares for over 500 patients with heart failure.\n\nBODY.METHODS.STUDY PARTICIPANTS:\nTo be eligible, patients had to have a clinical diagnosis of heart failure confirmed by their primary provider through a direct interview, and one of the following: 1) chest x-ray findings consistent with heart failure, 2) ejection fraction <40% by any method, or 3) a history of peripheral edema. They also had to have New York Heart Association class II-IV symptoms within the last 3 months. Patients were excluded if they had moderate to severe dementia (based on the treating physician's clinical judgment), terminal illness with life expectancy less than 6 months, severe hearing impairment, blindness, current substance abuse, a serum creatinine >4 mg/dl or on dialysis, a requirement of supplemental oxygen at home, lacked a telephone, or were scheduled to undergo cardiac surgery or awaiting heart transplant. We did not exclude patients on the basis of literacy skill because we felt patients of all literacy levels would benefit from this intervention. Patients who read well often prefer information presented in an easy-to-read format [16]. We accepted referrals from the cardiology clinic if patients met eligibility criteria. This study was approved by the Institutional Review Board of the UNC School of Medicine, and all participants gave informed consent prior to enrollment. For participants who could not adequately read the informed consent document, the research assistant read and explained it to them. 
They were asked to sign a short form indicating that the informed consent document was reviewed and they agreed to participate. When the short form was used, a witness was asked to attest to the adequacy of the consent process.\n\nBODY.METHODS.PROCEDURES:\nParticipants were recruited between November 2001 and April 2003 from the General Internal Medicine and Cardiology Practices at UNC Hospitals. A trained research assistant screened all patients age 30–80 for use of furosemide. If the patient was on furosemide, their physician was queried about the presence of heart failure. If the patient met eligibility criteria and consented to participate, baseline data were collected. We then randomized patients by concealed allocation based on a random number generator to receive the intervention or usual care. All patients were followed for one year. All data were collected in the General Internal Medicine Practice.\n\nBODY.METHODS.INTERVENTION:\nThe intervention was delivered in the General Internal Medicine Practice. The educational materials and disease management intervention were previously described in detail, and the intervention is summarized here [15]. The intervention began with a 1-hour educational session with a clinical pharmacist or health educator during a regular clinic visit. Patients were given an educational booklet designed for low literacy patients (written below the 6th grade level and extensively pre-tested in focus groups and a pilot study [15]) and a digital scale. The educator and patient reviewed the booklet together, including management scenarios. As part of the educational session, patients were taught to identify signs of heart failure exacerbation, perform daily weight assessment, and adjust their diuretic dose. Because this intervention was aimed at patients with low literacy, the health educator used pedagogic strategies felt to improve comprehension for patients with low literacy [17]. For example, the educator had the patient teach back the information [18], engaged the patient in filling out the notebook, and used brainstorming to help the patient incorporate self-management into their lives. The educator, patient, and primary care physician collaborated to establish the patient's \"good weight\" (i.e., where the patient's heart failure was stable) and baseline diuretic dose. The educator then filled in the management plan in the patient's notebook to help the patient better manage weight fluctuations and self-adjust the diuretic dose based on weight (Figure 1). The general plan involved doubling the dosage if weight went up and halving it if weight went down. The program coordinator then made scheduled follow-up phone calls (days 3, 7, 14, 21, 28, 56) and monthly during months 3–6. The follow-up phone calls, each lasting 5–15 minutes, were designed to reinforce the educational session and provide motivation for the patients. Again, the program coordinator had the patient describe their self-management practices and offered feedback to improve them. Patients experiencing worsening symptoms were scheduled acute visits with their physician. We did not provide specialized nursing assessment, care or medication advice beyond diuretic dosing. If the patient's doctor determined that the good weight had changed, the program coordinator would revise the care plan with the patient. 
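The care plan above reduces to a simple weight-based rule: relative to the agreed "good weight", double the usual diuretic dose if weight goes up and halve it if weight goes down. The sketch below illustrates that kind of rule only; the 3 lb trigger, the function name, and the returned advice strings are assumptions for illustration, since the article does not state the exact thresholds written into each patient's notebook, and nothing here is clinical guidance.

```python
def adjust_diuretic_dose(todays_weight_lb, good_weight_lb, usual_dose_mg, trigger_lb=3.0):
    """Illustrative weight-based self-adjustment rule (not clinical guidance).

    trigger_lb is an assumed threshold; the study's notebooks set each
    patient's plan individually with their physician."""
    change = todays_weight_lb - good_weight_lb
    if change >= trigger_lb:
        return usual_dose_mg * 2, "weight up: take double the usual dose"
    if change <= -trigger_lb:
        return usual_dose_mg / 2, "weight down: take half the usual dose"
    return usual_dose_mg, "weight stable: take the usual dose"

# Example: good weight 180 lb, usual furosemide dose 40 mg, weight today 184 lb.
print(adjust_diuretic_dose(184, 180, 40))  # -> (80, 'weight up: take double the usual dose')
```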
Patients enrolled in the control group received a general heart failure education pamphlet written at approximately the 7th grade level, and continued with usual care from their primary physician. The only contacts between the research team and the control patients were at enrollment and data collection.\n\nBODY.METHODS.MEASURES:\nWe assessed outcomes at 6 and 12 months through in-person interviews and review of the medical record. To be sensitive to low literacy, all interviews were conducted verbally by a trained research assistant. If patients were unable to come to clinic for the interview, it was conducted by phone. The research assistant was not blinded to the patient's study group. Primary outcomes were death or all-cause readmission and heart failure-related quality of life at the end of 12 months. Data on hospitalization dates were obtained from the patient and confirmed by medical chart review. All admissions, regardless of hospital or cause, were counted. For exploratory analyses, we classified reason for admission as cardiac or non-cardiac. Cardiac admissions included those primarily for heart failure (e.g., shortness of breath and edema relieved by diuresis) and other cardiac causes such as chest pain, arrhythmias, or syncope. Cause of admission was determined by chart review by one of the authors (D.D.) who was blinded to treatment allocation. Heart failure-related quality of life was assessed using a modified version of the Minnesota Living with Heart Failure Questionnaire (MLHF). The MLHF is a 21 question instrument with a 6-point Likert scale response option and scores ranging from 0 to 105 [19]. In pilot testing of the MLHF, we found that low literacy patients had trouble answering questions with the standard 6-point Likert scale [15], so we changed the response scale to 4 points, using a visual display with stars to represent increasing severity. The 4-point Likert scale was scored as 0, 1, 3, and 5 to approximate standard scores on the MLHF. Secondary measures included heart failure self-efficacy, knowledge, and behaviors. Self-efficacy was measured with an 8 item scale developed for the behaviors needed in this trial as suggested by self-efficacy scale developers [20]. Respondents used a 4-point Likert scale yielding a total score from 0–24. We assessed heart failure knowledge using a knowledge test previously developed for this population [15], Heart failure self-management behavior was assessed by asking patients how often they weighed themselves. We used patient self-report and the medical chart to measure age, gender, race, insurance status, income, years of education, medication use, years with heart failure, and the presence of co-morbidities. We measured literacy using the Short Test of Functional Health Literacy in Adults (S-TOFHLA) [21], a well-validated scale that correlates well with other measures of reading ability [22]. Patients who scored in the inadequate literacy range on the S-TOFHLA were considered to have \"low literacy.\" The cut-point for inadequate literacy is roughly analogous to the 4th grade reading level.\n\nBODY.METHODS.SAMPLE SIZE:\nSample size was based on pilot study results showing a 9-point improvement in MLHF scores over 3-months with the intervention [15]. Detecting a 9-point difference between intervention and control group with 80% power and alpha set at 0.05 required 70 patients per group. We aimed to enroll 150 patients to account for possible attrition, but stopped at 127 because funding ended. 
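As a rough check on the sample-size statement above, the usual two-sample formula n = 2 * (z_{1-alpha/2} + z_{1-beta})^2 * sd^2 / delta^2 per group gives roughly 70 patients per group for a 9-point MLHF difference at 80% power and two-sided alpha = 0.05 when the assumed standard deviation is about 19 points. The article does not report the SD it used, so that value is inferred here purely to illustrate the arithmetic.

```python
from scipy.stats import norm

def n_per_group(delta, sd, alpha=0.05, power=0.80):
    """Per-group sample size for a two-sample comparison of means (normal approximation)."""
    z_alpha = norm.ppf(1 - alpha / 2)   # 1.96 for alpha = 0.05
    z_beta = norm.ppf(power)            # 0.84 for 80% power
    return 2 * (z_alpha + z_beta) ** 2 * sd ** 2 / delta ** 2

# An assumed SD of ~19 points on the modified MLHF reproduces the stated 70 per group.
print(round(n_per_group(delta=9, sd=19)))  # -> 70
```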
We did not power this study to detect differences in hospitalization, but studies with even smaller numbers of patients have shown a difference for that outcome [7].\n\nBODY.METHODS.STATISTICAL METHODS:\nPatients who did not return any phone calls and did not return for follow-up assessment did not have outcome data for analysis. Patients who withdrew from the study were censored at the time of withdrawal; any data collected prior to withdrawal were included in the analysis. Baseline differences between groups were assessed using t-tests for continuous outcomes and chi-squared tests for categorical outcomes. For MLHF, heart failure self-efficacy and heart failure knowledge, we used two-sample t-tests. Non-parametric tests were also performed for all comparisons, but results did not differ from the parametric tests. Because of the small sample size and the unequal distribution of baseline characteristics, we adjusted for baseline differences using linear regression. Analyses of self-reported outcomes, such as MLHF, excluded patients who died or withdrew from the study before 6 or 12 month data was collected. For hospitalization or death, we used negative binomial regression and censored patients at the time of death or withdrawal from the study. Based on the likelihood ratio test, negative binomial regression was a better fit for the data than a Poisson regression. Additionally, the Vuong test confirmed that a zero-inflated model was inappropriate [23]. Because of uneven distribution of baseline characteristics, we performed analysis of covariance with negative binomial regression to control for baseline differences [24]. We identified the following variables that could contribute to hospitalization or death based on previous studies: age, race, gender, literacy level, hypertension, diabetes, creatinine, MLHF score, use of β-blockers, angiotensin converting enzyme (ACE) inhibitors or angiotensin receptor blockers (ARBs), use of digoxin, and presence of systolic dysfunction [7,25]. Variables were not included in the model if the relationship between the variable and the study group or outcome had a p value greater than 0.3. We started with a model including the following items to arrive at the best point estimate: age, gender, hypertension, creatinine, MLHF, use of β-blockers, and use of ACE inhibitors or ARBs. We then eliminated variables from the model if p > 0.30, and if the point estimate remained within 10% of the initial estimate. We prespecified a subgroup analysis in patients with low literacy to analyze if the intervention had a similar effect. The same analysis described above was repeated for the subgroups of patients with low literacy and those with higher literacy. The initial multivariate model for the subgroups analysis included: age, gender, hypertension, MLHF, use of β-blockers, and use of ACE inhibitors or ARBs.\n\nBODY.METHODS.ROLE OF THE FUNDING SOURCE:\nThe funding sources had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; or preparation, review, or approval of the manuscript.\n\nBODY.RESULTS.PATIENTS:\nWe screened 919 patients on furosemide between November 2001 and April 2003. 127 met eligibility criteria and agreed to participate (Figure 2). Of those not enrolled, 407 did not have heart failure according to their physician, 367 did not meet eligibility criteria and 27 declined to participate. 
Of those randomized to the control group, 1 never returned after the first visit, 1 withdrew during the study and 4 died during the study. Follow-up was completed for all of the remaining participants (98%) (Figure 3). Of those randomized to the intervention group, 3 never returned after the first visit, 4 withdrew during the study and 3 died during the study. Follow-up was completed for all of the remaining participants (93%). At baseline, most characteristics were similar between the two groups (Table 1). However, the control group had more participants with hypertension, fewer with diabetes, and fewer men. Of heart failure related characteristics, the control group had more participants with systolic dysfunction, and taking β-blockers. The intervention group had more participants taking ACE inhibitors or ARBs, and digoxin. Regardless of these differences, none were statistically significant. The control group did have statistically significantly higher baseline MLHF scores representing worse symptoms at baseline.\n\nBODY.RESULTS.HOSPITALIZATION OR DEATH:\nThere were 68 hospitalizations (65) or deaths (3) in the intervention group and 111 (107 hospitalizations, 4 deaths) in the control group. The crude all-cause hospital admission or death incidence rate ratio (IRR) was 0.69 (95% CI 0.40, 1.19). After adjusting for age, gender, use of ACE inhibitor or ARB, use of a β-blocker, presence of hypertension, and baseline MLHF, intervention patients were less likely to have the outcome (IRR = 0.53; 95% CI 0.32, 0.89). 61% of patients in the control group had at least one hospitalization or died, and 42% of patients in the intervention group had at least 1 hospitalization or died (p = 0.13).\n\nBODY.RESULTS.CARDIAC HOSPITALIZATION:\n39% of patients in the control group and 34% of patients in the intervention group had at least one hospitalization for cardiac causes (p = 0.55). The unadjusted IRR was 0.79 (95% CI 0.42, 1.5). After adjusting for baseline differences, the IRR was 0.85 (95% CI 0.44, 1.7).\n\nBODY.RESULTS.HEART FAILURE-RELATED QUALITY OF LIFE:\nIn unadjusted analysis, the control group, on average, improved 5 points on the MLHF and the intervention group improved by 1 point. The difference was not statistically significant (3.5 points, 95% CI 11, -4, p = 0.36). After adjusting for baseline differences between the groups, the difference was 2 points (95% CI 9, -5, p = 0.59) suggesting no effect on heart failure-related quality of life.\n\nBODY.RESULTS.OTHER OUTCOMES.KNOWLEDGE:\nHeart failure related knowledge improved more in the intervention group than in the control group. Mean difference in score improvement was 12 percentage points (95% CI 6, 18; p < 0.001).\n\nBODY.RESULTS.OTHER OUTCOMES.SELF-EFFICACY:\nHeart failure self-efficacy improved more in the intervention group than in the control group. Mean difference in score improvement was 2 points (95% CI 0.7, 3.1; p = 0.0026).\n\nBODY.RESULTS.OTHER OUTCOMES.SELF-CARE BEHAVIOR:\nSignificantly more patients in the intervention group than in the control group reported daily weight measurement at 12 months (79% vs. 29%, p < 0.001).\n\nBODY.RESULTS.SUBGROUP ANALYSES ACCORDING TO LITERACY:\nTwenty-four patients in each group had inadequate literacy based on the S-TOFHLA (Table 2). Among these patients, there was no difference in quality of life score in unadjusted and adjusted analyses (difference = -1.6; 95% CI -15, 12); p = 0.81). For the combined outcome of hospitalization or death, the unadjusted IRR was 0.77 (95% CI 0.30, 1.94). 
After adjusting for baseline differences, the IRR was 0.39 (95% CI 0.16, 0.91). Seventy-five patients had marginal or adequate literacy based on the S-TOFHLA. We found no difference in quality of life score in unadjusted and adjusted analyses (difference = -4.2; 95% CI -14, 6; p = 0.40). Among patients in the higher literacy group, the unadjusted IRR for hospitalization or death was 0.65 (95% CI 0.33, 1.3). After adjusting for baseline differences, the IRR was 0.56 (95% CI 0.30, 1.04). We did not find a statistically significant effect modification between literacy and the intervention.\n\nBODY.DISCUSSION:\nA heart failure self-management program designed for patients with low literacy reduced the rate of the combined endpoint of hospitalization or death. The prespecified subgroup analyses suggest that patients with low literacy benefited as much from the intervention as the patients with higher literacy. The success of our intervention reflects the goals of our program. We designed an easy-to-read and use educational booklet and self-management plan, and focused on overcoming barriers to learning self-management [15]. Our intervention was founded on teaching self-management. We focused on helping patients understand signs and symptoms of worsening heart failure and perform self-adjustment of diuretics based on weight fluctuation. Many care providers would not attempt to teach patients, particularly those with low literacy, how to self-adjust their diuretic medication. We found that, with careful teaching, many patients incorporated this strategy into their daily routine successfully, as demonstrated by improved self-care behaviors. Teaching self-adjustment of diuretics, rather than the conventional teaching to call the care provider if weight fluctuates, empowers patients to take more control over their illness. Self-adjustment of diuretic dose is a prominent aspect of the self-management training we provided to the intervention patients. Other programs to improve patient self-management have not been explicit in teaching patients to self-adjust their diuretic dose based on weight fluctuation. Although our outcomes are comparable to others', using this approach puts more control into the hands of the patient. Furthermore, our intervention appears effective among patients with low literacy skills, a group often overlooked for empowering interventions. Our study adds to the growing literature on disease management programs for patients with heart failure [6], particularly those that focus on self-management training [7-10]. Studies focusing on self-management training have demonstrated comparable improvements in hospitalization rates to more comprehensive programs that aim to improve the quality of pharmaceutical prescribing, provide home visits, and take place in specialized cardiology clinics [6]. Such comprehensive programs have also been shown to reduce mortality, but self-management programs have not [6]. We did not detect any difference in heart failure related quality of life which was the outcome we powered our study to detect. Other self-management studies that have found improved quality of life have enrolled patients during a heart failure hospitalization [8,9]; however, we enrolled patients in the outpatient setting while they were clinically stable. Improving quality of life for stable outpatients may be more difficult for this type of intervention. 
We have previously reported that patients with diabetes and low literacy benefited more from a disease management intervention than those with higher literacy skills [26]. A similar result in two different chronic diseases substantiates the claim that appropriately designed disease management programs may have greater effect for low literacy or vulnerable populations, who are most at risk for adverse outcomes with usual care. This finding is particularly important in light of the recent study by DeBusk and colleagues that did not find a reduction in hospitalization with a well-designed comprehensive intervention [13]. The authors and an accompanying editorial [14] suggested that the failure to detect improvement may have occurred because the patients studied were less at-risk than in other studies. They called for more research to determine better ways of targeting disease management. We believe that low literacy is an important marker for vulnerability to adverse outcomes, and that disease management programs targeted to patients with low literacy may be an effective way of focusing resources on those most able to benefit. If patients with low literacy are to be preferentially recruited for such programs, innovative outreach and screening efforts will likely be required, as patients with low literacy may face particular barriers to accessing such care. This study should be interpreted in light of its limitations. Research assistants were not blind to group assignment during the assessment of self-reported outcomes. As such, patients in the intervention may have been more likely to inflate their responses in an effort to please the interviewer. This effect would tend to inflate patient responses to the subjective assessments of heart failure-related quality of life, self-efficacy, and self-care behaviors. The MLHF questionnaire was modified from its original form to make it easier for patients with low literacy to respond. This change in the scale may have changed its ability to detect important changes in heart failure related quality of life. Because the groups' mean scores were almost identical, we do not feel this limitation changed our results. In a similar vein, most questionnaires are not validated in low literacy populations, raising questions as to their ability to perform to the same standards. Our sample size was small, which did not allow for an even distribution of baseline variables among the groups. We controlled for baseline differences between groups in our analysis. While it is controversial whether or not to control for baseline differences in randomized controlled trials, some analysts have argued that doing so improves the power without introducing bias [24]. A larger, multi-site study would offer better control of confounders, better generalizability, and more power to determine differences in effect according to literacy. We did not collect data on the resources needed to implement this type of intervention in usual settings, and such a study and cost-effectiveness analysis would be helpful for most interventions of this type. We used health educators, not nurses or physicians, to deliver the intervention. By using less highly trained individuals to deliver the intervention, we enabled nurses and physicians to perform clinical tasks more commensurate with their training. 
Other studies that have performed global cost estimates have found that the savings from reductions in hospitalizations exceed the cost of the intervention [6].\n\nBODY.CONCLUSION:\nIn conclusion, our heart failure self-management program, designed for patients of all literacy levels, appears to reduce rates of hospitalization and death. Patients with low literacy, and other vulnerable patients, may stand to benefit most from these programs. Further research into the design, implementation, and dissemination of disease management programs for low literacy patients will be crucial for meeting the health care needs of the growing population of patients with chronic illness.\n\nBODY.COMPETING INTERESTS:\nDrs. DeWalt and Pignone have received honoraria and grants from Pfizer, Inc., Dr. Rothman has received grants from Pfizer, Inc., and Dr. Sueta is in their speakers bureau.\n\nBODY.AUTHORS' CONTRIBUTIONS:\nDD conceived of the study, participated in its design and coordination, performed statistical analyses, interpretation of the data, and drafted the manuscript. RM, MB conceived of the study and participated in its coordination. MK, KC coordinated the study, and collected the data, RR, CS participated in study design and interpretation of the data. MP conceived of the study, participated in its design and coordination, and interpretation of the data. All authors reviewed the manuscript for important intellectual content and gave final approval.\n\nBODY.PRE-PUBLICATION HISTORY:\nThe pre-publication history for this paper can be accessed here:\n\n**Question:** Compared to Standard information about self-care what was the result of Follow-up and thorough education on self-care on Baseline beta-blockers intake?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** no significant difference\n"
}
] |
no significant difference
|
369
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Efficacy and Tolerability of Budesonide/Formoterol in One Hydrofluoroalkane Pressurized Metered-Dose Inhaler in Patients with Chronic Obstructive Pulmonary Disease\n\n ABSTRACT:\nBackground: Combination therapy with a long-acting bronchodilator and an inhaled corticosteroid (ICS) is recommended in patients with chronic obstructive pulmonary disease (COPD) who have frequent exacerbations. The efficacy and tolerability of the combination of budesonide/formoterol have been demonstrated in patients with COPD when administered via the dry powder inhaler (DPI) in a 1-year study and when administered via the hydrofluoroalkane (HFA) pressurized metered-dose inhaler (pMDI) in a 6-month study. Objective: This study assessed the long-term efficacy and tolerability of budesonide/formoterol HFA pMDI in patients with moderate to very severe COPD. Methods: This was a 12-month, randomized, double-blind, double-dummy, parallel-group, active- and placebo-controlled, multicentre study (NCT00206167) of 1964 patients aged ≥40 years with moderate to very severe COPD conducted from 2005 to 2007 at 237 sites in the US, Europe and Mexico. After 2 weeks of treatment based on previous therapy (ICSs, short-acting bronchodilators allowed), patients received one of the following treatments twice daily: budesonide/formoterol pMDI 160/4.5 μg × two inhalations (320/9 μg); budesonide/formoterol pMDI 80/4.5 μg × two inhalations (160/9 μg); formoterol DPI 4.5 μg × two inhalations (9 μg); or placebo. Main outcome measures: The co-primary efficacy variables were pre-dose forced expiratory volume in 1 second (FEV1) and 1-hour post-dose FEV1. Results: Budesonide/formoterol 320/9 μg demonstrated greater improvements in pre-dose FEV1 versus formoterol (p = 0.008), and both budesonide/formoterol doses demonstrated greater improvements in 1-hour post-dose FEV1 versus placebo (p < 0.001). The rate of COPD exacerbations was lower in both budesonide/formoterol groups compared with formoterol and placebo (p ≤ 0.004). Both budesonide/formoterol doses were more effective than placebo (p ≤ 0.006) for controlling dyspnoea and improving health status (St George's Respiratory Questionnaire). All treatments were generally well tolerated. The incidence of pneumonia was not different for active (3.4–4.0%) and placebo (5.0%) groups. Conclusions: Budesonide/formoterol pMDI (320/9 μg and 160/9 μg) improved pulmonary function and reduced symptoms and exacerbations over 1 year in patients with moderate to very severe COPD. Only budesonide/formoterol pMDI 320/9 μg demonstrated greater efficacy for both co-primary variables compared with formoterol DPI 9 μg. 
Both budesonide/formoterol pMDI dosages were well tolerated relative to formoterol and placebo.\n\nABSTRACT.ELECTRONIC SUPPLEMENTARY MATERIAL:\nSupplementary material is available for this article at 10.2165/00003495-200969050-00004 and is accessible for authorized users.\n\nBODY.BACKGROUND:\nIn patients with chronic obstructive pulmonary disease (COPD) and frequent exacerbations, the combination of a long-acting bronchodilator and an inhaled corticosteroid (ICS) is recommended.[1] Budesonide and formoterol delivered via a single dry powder inhaler (DPI; Symbicort® Turbuhaler®; AstraZeneca, Lund, Sweden) have been shown to improve pulmonary function[2,3] and reduce COPD exacerbation rates.[2,3] Fixed-combination budesonide/formoterol is available in the US in a hydrofluoroalkane pressurized metered-dose inhaler (pMDI; Symbicort® Inhalation Aerosol; AstraZeneca LP, Wilmington, DE, USA). While most patients with COPD can benefit from either a pMDIor DPI, the pMDI formulation allows patients whomay have inadequate peak inspiratory flow for optimal delivery from a DPI to use the pMDI effectively. A recent 6-month study demonstrated the efficacy and tolerability of budesonide/formoterol pMDI in patients with moderate to very severe COPD.[4] Treatment with budesonide/formoterol pMDI resulted in significant improvements in pulmonary function, dyspnoea and health-related quality of life; however, the duration of the study was limiting with respect to the analysis of low-frequency events, such as COPD exacerbations, and precluded an assessment of safety outcomes with long-term exposure.[4] In this study, the long-term efficacy and tolerability of two dosage strengths of budesonide/formoterol pMDI were assessed over 12 months in patients with moderate to very severe COPD.\n\nBODY.PATIENTS AND METHODS.PATIENTS:\nThe inclusion criteria were designed to select a population with moderate to very severe COPD with previous exacerbations (i.e. appropriate candidates for combination ICS/long-acting β2-adrenoceptor agonist [LABA] therapy): age ≥40 years, diagnosis of symptomatic COPD for >2 years, ≥10 pack-year smoking history, prebronchodilator forced expiratory volume in 1 second (FEV1) of ≤50% of predicted normal and prebronchodilator FEV1/forced vital capacity (FVC) of <70%. Patients were to have a Modified Medical Research Council dyspnoea scale score of ≥2 and a history of at least one COPD exacerbation requiring oral corticosteroids or antibacterials within 1–12 months before the first study visit. Additional enrolment criteria were the same as those in a similar 6-month study by Tashkin et al.[4]\n\nBODY.PATIENTS AND METHODS.STUDY DESIGN:\nThis was a randomized, double-blind, double-dummy, parallel-group, active- and placebo-controlled, multicentre trial (ClinicalTrials.gov identifier: NCT00206167) conducted from 2005 to 2007 at 237 sites in the US, Europe and Mexico. The study protocol was approved by a local institutional review board and ethics committee, and written informed consent was obtained from patients. The study was designed to conform with the Declaration of Helsinki, and was consistent with the International Conference on Harmonisation and Good Clinical Practice and applicable regulatory requirements. After meeting eligibility criteria, patients entered a 2-week run-in period, during which they received ICS monotherapy if previously stable on ICS (alone or in combination) and ipratropium bromide at a fixed dose if previously receiving anticholinergics. 
Albuterol (salbutamol) was permitted for rescue use throughout the study. At randomization, all previous ICSs were discontinued and patients were randomized to one of four treatments (figure 1). Patients who were treated with ipratropium bromide during the run-in period were allowed to continue on it throughout the study period. Other allowed and disallowed concomitant medications were the same as those reported in the study by Tashkin et al.[4]\nFig. 1Study design. To maintain blinding, patients received both a pressurized metered-dose inhaler (pMDI) and a dry powder inhaler (DPI) containing either active treatment or double-dummy placebo (PL) as appropriate. This study used formoterol (FM) DPI (Oxis® Turbuhaler®, AstraZeneca, Lund, Sweden) as the FM comparator because FM is not available as a hydrofluoroalkane pMDI in the US. A previous study in asthma patients reported equivalent FM-related bronchodilatory effects when FM was administered in combination with budesonide (BUD) via pMDI or alone via DPI.[5] Patients were asked to return to the clinic for follow-up visits 3–8 at the end of months 1, 2, 4, 6, 9 and 12, and received a telephone call 4 weeks after the last clinic visit. bid = twice daily; R = randomization.\n\nBODY.PATIENTS AND METHODS.EFFICACY EVALUATIONS:\nThe co-primary efficacy variables, measured at all clinic visits, were pre-dose FEV1 and 1-hour post-dose FEV1. Additional pulmonary function variables included pre-dose and 1-hour post-dose FVC measured at all clinic visits, and morning and evening peak expiratory flow (PEF) recorded daily in patient diaries. In a subset of patients (n = 491 of 1964; 25%) who agreed to undergo spirometry testing, 12-hour serial spirometry was performed and pre-dose and 1-hour post-dose inspiratory capacity (IC) were collected at randomization and months 6 and 12. FEV1 was measured pre-dose and at 5, 15, 30, 60, 120, 180, 240, 360, 480, 600 and 720 minutes post-dose. Baseline-adjusted average 12-hour FEV1 was calculated as the area between the 12-hour post-dose FEV1-over-time curve and the baseline pre-dose FEV1, divided by observation time. Spirometry was performed according to American Thoracic Society guidelines.[6] Crapo-predicted normals for FEV1 were used.[7] Secondary efficacy endpoints included COPD exacerbations, dyspnoea (separate and composite scores) and health status, all assessed as previously described.[4] A COPD exacerbation was defined as worsening of COPD requiring an oral corticosteroid or hospitalization.[4] Dyspnoea was assessed daily before the evening dose of study medication using the Breathlessness Diary, a validated single-item component of the Breathlessness Cough and Sputum Scale (BCSS).[8] Health status was assessed at months 1, 2, 6 and 12 using the St George's Respiratory Questionnaire (SGRQ).[9,10] Sleep score, percentage of awakening-free nights (sleep score of 0) and study rescue medication use were assessed as previously described by Tashkin et al.[4]\n\nBODY.PATIENTS AND METHODS.SAFETY EVALUATIONS:\nSafety was assessed by adverse event (AE) reporting. Pneumonia events were reported by physicians based on the Medical Dictionary for Regulatory Activities (version 10.0) pneumonia-related preferred terms (pneumonia, bronchopneumonia, lobar pneumonia or pneumonia staphylococcal). 
Vital signs and 12-lead ECGs were evaluated as previously described by Tashkin et al.[4] Subsets of patients were assessed for 24-hour urinary cortisol (n = 179), 24-hour Holter monitoring (n = 520), bone mineral density (BMD) at the hip and spine regions (n = 326), and ophthalmological assessments (n = 461), including intraocular pressure and progression of lenticular opacities. In all patients, blood and urine samples were collected, and comprehensive physical examinations were carried out at the time of screening and at the end of month 12. Blood and urine samples were analysed by a central laboratory (Quest Diagnostics Clinical Laboratories, Van Nuys, CA, USA). Vital signs, including heart rate and blood pressure, were measured at all study visits. Samples to assess 24-hour urinary cortisol levels were collected in a subset of patients at or before randomization and within 1 week before the 6- and 12-month visits, and analysed by Quest Diagnostics. 24-Hour Holter monitoring was performed at screening and months 1 and 4, BMD assessments (two sets of dual energy x-ray absorptiometry scans in the hip and spine regions) were performed at screening and the end of month 12, and ophthalmology assessments (intraocular pressure and lenticular opacities) were performed at screening and the end of months 6 and 12.\n\nBODY.PATIENTS AND METHODS.STATISTICAL ANALYSES:\nThe efficacy analysis set (i.e. intent-to-treat population) included all randomized patients who received at least one dose of randomized study medication and contributed sufficient data for at least one co-primary or secondary efficacy endpoint to be calculated during the randomized treatment period. The safety analysis population included all randomized patients who received at least one dose of randomized study medication and from whom any post-randomization data were available. For the subsets of patients who underwent serial spirometry, 24-hour urinary cortisol, BMD, 24-hour Holter monitoring and ophthalmological analyses, analysis sets were defined as those patients who received at least one dose of randomized study medication and had baseline and on-treatment values for the variable being assessed. Patients who discontinued prematurely completed final visit (month 12) assessments at the time of discontinuation, followed by the 4-week follow-up. A sample size of approximately 400 patients per treatment group was calculated to allow 90% power to detect a reduction in the number of COPD exacerbations of approximately 30%, adjusting for overdispersion of 2.3. This sample size ensured >95% power to detect a difference of 0.10L in FEV1, given an estimated standard deviation of 0.3 L. All tests were two-sided, with p ≤ 0.05 considered significant. The prespecified primary comparators were formoterol DPI for pre-dose FEV1 to demonstrate the contribution of budesonide and placebo for 1-hour post-dose FEV1. To address issues of multiplicity relating to multiple-dose comparisons and multiple variables of interest, a fixed-sequence testing procedure was adopted. The primary comparisons were prespecified as budesonide/formoterol pMDI 320/9 μg compared with (i) placebo for pre-dose FEV1 and 1-hour post-dose FEV1; and (ii) formoterol DPI 9 μg for pre-dose FEV1. If significant differences were obtained, comparisons continued with budesonide/formoterol pMDI 320/9 μg compared with placebo for (i) dyspnoea; (ii) SGRQ total score; and (iii) number of exacerbations. 
If significant differences were obtained for each of these key secondary variables, the lower dose of budesonide/formoterol pMDI was tested on the co-primary variables, as previously described, and, if significant differences were obtained, testing continued with number of exacerbations, dyspnoea and SGRQ total score compared with placebo. For all secondary efficacy variables, the primary comparison was budesonide/formoterol pMDI versus placebo. Primary and secondary variables were assessed as the change from baseline to the average over the randomized treatment period except for average 12-hour FEV1, mean FEV1 at 12 hours and SGRQ at end of treatment. Changes from baseline in the co-primary efficacy variables were analysed via analysis of co-variance (ANCOVA), adjusting for treatment, country and baseline value. The following three key secondary efficacy variables were identified: (i) dyspnoea; (ii) SGRQ total score; and (iii) exacerbations. Secondary variables were analysed similar to the co-primary efficacy variables; however, exacerbation rate (number/patient-treatment year) was analysed using a Poisson regression model, adjusted for country and differential randomization time, and time to first COPD exacerbation was described via Kaplan-Meier plot and analysed using the log-rank test. The number and percentage of patients experiencing clinically meaningful changes (based on a prespecified minimal important difference) in dyspnoea were analysed using the Cochran-Mantel-Haenszel test, adjusting for country. Geometric mean 24-hour urinary cortisol levels at end of treatment were compared between treatment groups using a multiplicative ANCOVA model. Mean changes from baseline to the average during the randomized treatment period in 12-lead ECG, and Holter variables and ophthalmology variables were analysed using a model similar to that used for the co-primary variables. BMD variables (natural logarithm of the analysis timepoint minus the natural logarithm of the respective baseline value for hip and spine) were analysed using an ANCOVA model adjusting for country, treatment and natural logarithm of the baseline value. Other safety data were summarized using descriptive statistics. No formal hypothesis testing of the safety data was performed.\n\nBODY.RESULTS.PATIENTS:\nOf 1964 randomized patients, 1355 completed the study (figure 2). Discontinuation was greater with placebo versus budesonide/formoterol and formoterol driven by withdrawal of consent. Time to discontinuation was significantly (p ≤ 0.004) prolonged in both budesonide/formoterol groups versus placebo. AE was the most common reason for study discontinuation, with a similar incidence across treatments (12.1–13.5%). Approximately 60% of patients had documented use of an ICS, either alone or in combination before entering the study. The percentage of patients who discontinued from the study in the formoterol and placebo groups was slightly higher in patients previously treated with ICS (34.9% and 38.0%, respectively) compared with those not previously receiving ICS before the study (26.8% and 33.9%, respectively). In contrast, patients in the budesonide/formoterol 320/9-μg group previously receiving ICS had a lower discontinuation rate compared with those not previously receiving ICS (23.1% vs 34.1%, respectively). In the budesonide/formoterol 160/9-μg group, the percentage of patients who discontinued the study was similar for those who were previously receiving an ICS (29.5%) and those who were not (28.2%).\nFig. 
2Patient disposition. AE = adverse event; bid = twice daily; BUD = budesonide; DPI = dry powder inhaler; FM = formoterol; med = medication; PL = placebo; pMDI = pressurized metered-dose inhaler. Most demographic and disease characteristics at baseline were similar across treatments (table I). However, in the serial spirometry subset (n = 491), mean percentage reversibility was greater in the budesonide/formoterol 320/9 μg (19.9%), budesonide/formoterol 160/9 μg (20.7%) and placebo (19.5%) groups versus formoterol (16.9%). COPD severity[1] was moderate in 17.8%, severe in 60.4% and very severe in 21.5% of patients. Common co-morbid conditions included hypertension (41.6%), lipid profile abnormalities (22.0%), cardiac disease (17.7%), diabetes mellitus (11.0%), osteoporosis (10.5%), cataracts (5.2%), atrial fibrillation/arrhythmia (4.3%) and congestive cardiac failure (2.9%).\nTable IPatient demographic and baseline clinical characteristics of randomized patients\n\nBODY.RESULTS.EFFICACY EVALUATIONS.PULMONARY FUNCTION.CO-PRIMARY ASSESSMENTS:\nImprovements in pre-dose FEV1 were significantly greater for budesonide/formoterol 320/9 μg compared with formoterol (primary comparison; p = 0.008) and for both budesonide/formoterol dosages compared with placebo (p < 0.001) [figure 3a]. Although improvements in 1-hour post-dose FEV1 were significantly greater for both budesonide/formoterol dosages compared with placebo (primary comparison; p < 0.001), budesonide/formoterol 320/9 μg also resulted in significantly greater improvements compared with formoterol (p = 0.023) [figure 3b]. Improvements from baseline were apparent at the first assessment (pre-dose FEV1 at end of month 1; 1-hour post-dose FEV1 at day of randomization) and overall maintained over the 12-month treatment period for both budesonide/formoterol dosages (figure 3a and b).\nFig. 3Co-primary efficacy endpoints. Least squares mean change from baseline by study visit over the randomized treatment period in (a) pre-dose forced expiratory volume in 1 second (FEV1) and (b) 1-hour post-dose FEV1. BUD = budesonide; DPI = dry powder inhaler; FM = formoterol; pMDI = pressurized metered-dose inhaler. * p < 0.001 vs placebo; † p ≤ 0.023 vs FM.\n\nBODY.RESULTS.EFFICACY EVALUATIONS.PULMONARY FUNCTION.SECONDARY ASSESSMENTS:\nMorning and evening PEF improved significantly for all active treatments compared with placebo (p ≤ 0.012) and for both budesonide/formoterol dosages compared with formoterol (p ≤ 0.017) [table II]. Results for pre-dose and 1-hour post-dose FVC are presented in the online supplement (see figures S1a and b in the supplementary material ['ArticlePlus'] at http://links.adisonline.com/DGZ/A5).\nTable IIMean (SD) changes from baseline in additional pulmonary function assessments In the serial spirometry subset, a ≥15% improvement in FEV1 was observed at 5 minutes after dose administration (first assessment) with all active treatments on the day of randomization (budesonide/formoterol 320/9 μg, 17.7%; budesonide/formoterol 160/9 μg, 20.3%; formoterol 9 μg, 16.5%) and at end of treatment (budesonide/formoterol 320/9 μg, 21.8%; budesonide/formoterol 160/9 μg, 22.4%; formoterol 9 μg, 15.0%) [figures 4a, b and c]. There was no evidence of a diminished effect at end of treatment over the 12-hour period in the budesonide/formoterol groups. 
Mean FEV1 at 12 hours and baseline-adjusted average 12-hour FEV1 were significantly improved with both budesonide/formoterol dosages compared with placebo on the day of randomization and at end of treatment (p ≤ 0.002) [table II]. Although both budesonide/formoterol dosages demonstrated significant improvements in mean FEV1 at 12 hours and in baseline-adjusted average 12-hour FEV1 compared with formoterol on the day of randomization (p ≤ 0.029), only budesonide/formoterol 320/9 μg demonstrated this effect at the end of treatment (p ≤ 0.004) [table II].\nFig. 4Mean percentage change from baseline in forced expiratory volume in 1 second (FEV1) over 12 hours at randomization and end of treatment (EOT) for (a) budesonide (BUD)/formoterol (FM) 320/9 μg twice daily (bid) vs placebo, (b) BUD/FM 160/9 μg bid vs placebo and (c) FM 9 μg bid vs placebo. DOR = day of randomization; DPI = dry powder inhaler; pMDI = pressurized metered-dose inhaler. No significant differences in pre-dose IC were observed among the treatment groups (figure 5a). Improvements from baseline to the average over the randomized treatment period in 1-hour post-dose IC were significantly greater with both budesonide/formoterol dosages compared with placebo (p < 0.001) and formoterol (p ≤ 0.018) [figure 5b]. Both dosages of budesonide/formoterol resulted in a 350 mL (≈25%) improvement from baseline to the average over the randomized treatment period in 1-hour post-dose IC, and formoterol resulted in a 210 mL (17%) improvement in 1-hour post-dose IC.\nFig. 5Least squares mean change from baseline by study visit over the randomized treatment period in (a) pre-dose inspiratory capacity (IC) and (b) 1-hour post-dose IC. BUD = budesonide; DPI = dry powder inhaler; FM = formoterol; pMDI = pressurized metered-dose inhaler. * p < 0.001 vs placebo; † p = 0.01 vs FM; ‡ p < 0.05 vs FM; § p < 0.01 vs placebo.\n\nBODY.RESULTS.EFFICACY EVALUATIONS.CHRONIC OBSTRUCTIVE PULMONARY DISEASE (COPD) EXACERBATIONS:\nTime to first COPD exacerbation was significantly prolonged with both budesonide/formoterol dosages compared with placebo (p ≤ 0.004) and with budesonide/formoterol 320/9 μg compared with formoterol (p = 0.026) [figure 6]. In addition, significant reductions in the overall number of exacerbations per patient-treatment year were observed with budesonide/formoterol 320/9 μg and 160/9 μg versus placebo (37% and 41%, respectively; p < 0.001) and formoterol (25% and 29%, respectively; p ≤ 0.004) [see figure S2 in the supplementary material]. These reductions were driven by exacerbations treated with oral corticosteroids, the rate of which was reduced with budesonide/formoterol 320/9 μg and 160/9 μg versus placebo (37.5% and 42.9%, respectively) and formoterol (24.1% and 30.6%, respectively) [p ≤ 0.006]. In addition, the percentage of patients in the budesonide/formoterol 320/9 μg, budesonide/formoterol 160/9 μg, formoterol and placebo groups who experienced an exacerbation was greater in those receiving ICS therapy either alone or in combination before the study (33.7%, 38.2%, 39.9% and 40.1%, respectively) compared with those not previously receiving ICS (25.8%, 24.9%, 29.4% and 32.8%, respectively).\nFig. 6Kaplan-Meier probability curve for the time to first chronic obstructive pulmonary disease exacerbation during randomized treatment. BUD = budesonide; DPI = dry powder inhaler; FM = formoterol; pMDI = pressurized metered-dose inhaler. 
* p ≤ 0.004 vs placebo; † p = 0.026 vs FM.\n\nBODY.RESULTS.EFFICACY EVALUATIONS.HEALTH STATUS AND COPD SYMPTOMS:\nImprovements in SGRQ total score were significantly greater for both budesonide/formoterol dosages compared with placebo (p ≤ 0.006) and for budesonide/formoterol 160/9 μg compared with formoterol (p = 0.006; table III). The improvements from baseline in SGRQ total score in the active treatment groups were not clinically meaningful versus placebo (i.e. decrease of ≥4 points). Significantly greater improvements in COPD symptom variables were observed for both budesonide/formoterol dosages versus placebo for all variables (BCSS, dyspnoea score, cough score, sputum score, sleep score, percentage of awakening-free nights and total daily rescue medication use) [p ≤ 0.003]. Compared with formoterol, mean improvements were significantly greater with budesonide/formoterol 320/9 μg (p ≤ 0.038) for all COPD symptom variables except sputum score and percentage of awakening-free nights, and with budesonide/formoterol 160/9 μg (p ≤ 0.047) for all COPD symptom variables except BCSS, dyspnoea and sputum score (table IV).\nTable IIIMean changes (SD) in SGRQ total and domain scores from baselinea to end of treatmentbTable IVMean (SD) changes in chronic obstructive pulmonary disease (COPD) symptom variables from baselinea to the average over the randomized treatment period Improvements (i.e. reductions) in dyspnoea scores were significantly greater for all active treatments versus placebo (p ≤ 0.003) and for budesonide/formoterol 320/9 μg versus formoterol (p = 0.032; table IV). All active treatment arms demonstrated ≥0.2 points (minimal important difference) change from baseline, but only budesonide/formoterol 320/9 μg had an increase of ≥0.2 points over placebo. A significantly (p < 0.001) greater percentage of patients in both budesonide/formoterol groups and in the formoterol group experienced clinically meaningful improvements in dyspnoea versus placebo (see figure S3 in the supplementary material).\n\nBODY.RESULTS.SAFETY EVALUATIONS:\nMean treatment exposure was lowest for placebo (270 days) compared with budesonide/formoterol 320/9 μg (305 days), budesonide/formoterol 160/9 μg (299 days) and formoterol (289 days). The most commonly reported AE (irrespective of causality) was COPD, which had a lower incidence with budesonide/formoterol 320/9 μg versus placebo and a slightly higher incidence with budesonide/formoterol 160/9 μg versus placebo (table V). AEs considered by the investigator to be related to study medication were generally similar among treatment groups with the most commonly reported being oral candidiasis, COPD and dysphonia (see table SI in the supplementary material).\nTable VOverall adverse events (AEs) [irrespective of relationship to study medication] reported by ≥3% of patients The total incidence of pneumonia-related AEs (pneumonia, bronchopneumonia, lobar pneumonia and pneumonia staphylococcal) was similar for budesonide/formoterol 320/9 μg and 160/9 μg (4.0% and 3.4%, respectively) compared with formoterol (3.4%) and placebo (5.0%) [see table SII in the supplementary material]. For potential lung infections other than pneumonia, the incidence was slightly higher in the active treatment groups versus placebo, which was driven largely by bronchitis (see table SII in the supplementary material). 
AEs typically or potentially associated with local and systemic effects of inhaled corticosteroids (local effects: aphonia, dysphonia, oral candidiasis and candidiasis; systemic effects: weight gain, adrenal suppression, ocular effects, skin effects, psychiatric disorder, diabetes control, thirst, taste effects and bone effects) were more frequent with budesonide/formoterol 320/9 μg (10.3% and 4.0%, respectively) and budesonide/formoterol 160/9 μg (5.7% and 4.5%, respectively) than formoterol (0.6% and 2.6%, respectively) and placebo (2.5% and 2.7%, respectively). The overall incidence of LABA class effects (i.e. tremor, palpitation, tachycardia, potassium changes, glucose changes, headache, agitation, anxiety, sleep effects and muscle cramp) was low, but higher among active treatment groups (budesonide/formoterol 320/9 μg [9.5%], budesonide/formoterol 160/9 μg [8.9%] and formoterol [6.5%]) versus placebo (4.8%). The incidence of cardiac-related AEs was higher for all active treatments (10.5–11.3%) versus placebo (6.9%). The most common cardiac-related AEs were hypertension and angina pectoris (budesonide/formoterol 320/9 μg [2.4% and 1.2%, respectively], budesonide/formoterol 160/9 μg [1.6% and 0.8%, respectively], formoterol [2.6% and 1.8%, respectively] and placebo [2.1% and 1.0%, respectively]). Although the number of atrial fibrillation AEs was low, all cases occurred within the active treatment groups. Discontinuations due to AEs (DAEs) occurred in 238 patients, with a similar incidence observed across treatment groups (range 11.3–12.5%) [see table SIII in the supplementary material]. The most common DAE was COPD, which was highest in the formoterol group (7.3%) and lowest in the budesonide/formoterol 320/9 μg group (4.0%); the incidence was similar in the budesonide/formoterol 160/9 μg (6.1%) and placebo (6.0%) groups (see table SIII in the supplementary material). The overall incidence of non-fatal serious adverse events (SAEs) was highest with formoterol (17.8%) compared with the other treatment groups and higher with budesonide/formoterol 320/9 μg (15.6%) and budesonide/formoterol 160/9 μg (13.6%) versus placebo (12.1%) [see table SIV in the supplementary material]. The most common non-fatal SAEs (occurring in five or more patients [all treatment groups combined]) were COPD (6.8%), pneumonia (1.5%), atrial fibrillation (0.5%), angina pectoris (0.3%), bronchitis (0.3%) and coronary artery disease (0.3%). These non-fatal SAEs occurred at a similar incidence in all treatment groups, except for COPD, which was slightly more common in the active treatment groups (budesonide/formoterol 320/9 μg [7.1%], budesonide/formoterol 160/9 μg [6.7%] and formoterol [7.9%]) versus placebo (5.6%). Fifteen patients died during the randomized treatment period with no imbalance across treatments: three in the budesonide/formoterol 320/9 μg group; six in the budesonide/formoterol 160/9 μg group; two in the formoterol group; and four in the placebo group. Fifteen patients died after the randomized treatment period (≥2 days after stopping study treatment) with no imbalance across treatments: five in the budesonide/formoterol 320/9 μg group; two in the budesonide/formoterol 160/9 μg group; four in the formoterol group; and four in the placebo group. None of the deaths were considered drug related by the investigator. Clinically significant changes in vital signs, 24-hour urinary cortisol, and 12-lead ECGs, QT interval and Holter assessments were rare. 
The geometric mean values of 24-hour urinary cortisol at 6 months and end of treatment were lower in both budesonide/formoterol dosage groups compared with formoterol and placebo (see table SV in the supplementary material). Significant differences in 24-hour urinary cortisol were observed between the budesonide/formoterol 320/9 μg and placebo groups (p = 0.035) at month 6 and between the budesonide/formoterol 320/9 μg and formoterol groups (p = 0.044) at end of treatment. Corrected QT (QTc) intervals ≥450 msec were experienced by similar percentages of patients across all treatment groups, whereas few patients experienced QTc intervals ≥500 msec or a QTc change ≥60 msec (see table SVI in the supplementary material). Data from ECG and Holter recordings showed a similar incidence of new onset atrial fibrillation in the active and placebo groups. BMD was stable over the 1-year study period in all treatment groups, with small but statistically significant differences in changes from baseline observed for budesonide/formoterol 320/9 μg compared with all other treatments for total lumbar spine BMD (p ≤ 0.037) and for budesonide/formoterol 320/9 μg versus formoterol for total hip BMD (p = 0.012) [see table SVII in the supplementary material]. The differences between the treatment groups in changes from baseline in BMD were of unclear clinical relevance, as the geometric least squares mean ratios for these treatment group comparisons were close to 1 (range 0.98–0.99). Minor increases in lenticular opacities and intraocular pressure were noted across all treatment groups in the ophthalmological analysis subset; the magnitude of these changes was similar among the treatment groups (see table SVIII in the supplementary material). Small but significant differences were observed for the change from baseline in posterior subcapsular score between the budesonide/formoterol groups (p = 0.022). Clinically significant changes in ophthalmological assessment were infrequent.\n\nBODY.DISCUSSION:\nIn this study, both budesonide/formoterol dosage strengths resulted in significant improvements from baseline over the randomized treatment period in pulmonary function, health status and COPD symptoms, as measured by dyspnoea, cough and sputum production, and a clinically significant reduction in COPD exacerbation rate compared with placebo. In addition, both budesonide/formoterol pMDI dosages were well tolerated relative to formoterol alone and placebo. The results of this 12-month study confirm those from the earlier related 6-month study by Tashkin et al.[4] and are consistent with those from studies of budesonide/formoterol DPI.[2,3] Taken together, the results demonstrate that this ICS/LABA combination provides benefits beyond formoterol alone in the treatment of COPD. The contribution of budesonide to improvements from baseline in pulmonary function and COPD symptoms was demonstrated by the greater efficacy of budesonide/formoterol 320/9 μg compared with formoterol alone for the co-primary variables of pre-dose FEV1 and 1-hour post-dose FEV1, and for dyspnoea and total BCSS scores, supporting use of the higher-dose combination for COPD symptoms. Both budesonide/formoterol dosage strengths demonstrated greater improvements from baseline compared with formoterol alone in morning and evening PEF, 12-hour FEV1, symptom scores and rescue medication use. 
In addition, the 25–29% reduction in exacerbation rate observed with both budesonide/formoterol dosage strengths beyond the substantial reduction achieved with formoterol alone further demonstrates the important contribution of budesonide to the combination product. The magnitude of the reduction in exacerbation rates reported in this study is similar to that reported previously for fixed-dose regimens of budesonide/formoterol 320/9 μg twice daily[2–4] and fluticasone propionate/salmeterol[11–13] relative to LABA alone. A key difference between this 12-month study and the 6-month study of similar design reported by Tashkin et al.[4] is that this study was powered to show a difference in exacerbations, while the 6-month study was not. This study demonstrated a significant reduction in exacerbation rates in patients treated with budesonide/formoterol 320/9 μg and 160/9 μg compared with those receiving formoterol or placebo. The fact that a greater percentage of patients in the formoterol and placebo groups who previously received ICS therapy, either alone or in combination, experienced exacerbations compared with those who did not previously receive ICS therapy may result in a skewing of the data; however, a similar response was observed in combination budesonide/formoterol patients with a greater percentage of patients who previously received ICS experiencing exacerbations compared with patients who were not previously treated with ICS therapy. Similarly, a greater percentage of patients who previously received ICS therapy discontinued in the formoterol and placebo groups than those who did not previously receive ICS therapy; however, this trend was reversed in the combination budesonide/formoterol groups. The long-term nature of this study allowed for a more thorough assessment of tolerability compared with the 6-month study.[4] No differences in pneumonia incidence were observed among treatment groups in this study or in the 6-month study, despite a shorter exposure time in the placebo group.[4] These results contrast with previous COPD studies that showed an increased incidence of pneumonia with fluticasone propionate/salmeterol treatment.[11–14] However, in this study and the 6-month study,[4] the incidence of potential lung infections other than pneumonia was slightly higher in all active treatments compared with placebo; these differences were largely driven by an increased incidence of bronchitis. The incidence of SAEs, including COPD SAEs, was slightly higher in both budesonide/formoterol groups compared with placebo. This finding may be attributed to the earlier discontinuation in the placebo group, which resulted in a shorter study drug exposure (35 and 29 fewer days vs budesonide/formoterol 320/9 μg and 160/9 μg, respectively). Although the overall incidence of atrial fibrillation, a known LABA class effect, was small (1.1%), an imbalance was noted between the formoterol-containing arms and placebo. However, no differences in new-onset atrial fibrillation were apparent among groups based on 24-hour Holter monitoring or ECG data collected at repeated visits during the randomized treatment period. 
Overall, the safety profile of budesonide/formoterol pMDI in this 12-month study was similar to that reported in the 6-month study by Tashkin et al.,[4] suggesting no increase in safety concerns with a longer duration of treatment.\n\nBODY.CONCLUSIONS:\nBoth dosage strengths of budesonide/formoterol pMDI (320/9 μg and 160/9 μg) were effective and well tolerated over 1 year of treatment in this population of patients with moderate to very severe COPD. In addition, budesonide/formoterol pMDI 320/9 μg demonstrated greater efficacy compared with formoterol for pre-dose FEV1, suggesting that the higher dosage strength containing twice the daily dose of budesonide is appropriate in this patient population.\n\nBODY.ELECTRONIC SUPPLEMENTARY MATERIAL:\nSupplementary material, approximately 247 KB.\n\n**Question:** Compared to Formoterol DPI 9 μg what was the result of Budesonide/formoterol pMDI 320/9 μg on Breathlessness Cough and Sputum Scale improvement?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly increased\n"
}
] |
significantly increase
|
250
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Short-term comparative study of high frequency chest wall oscillation and European airway clearance techniques in patients with cystic fibrosis\n\n ABSTRACT.BACKGROUND:\nHigh frequency chest wall oscillation (HFCWO) is standard treatment for airway clearance in the USA and has recently been introduced in the UK and Europe. There is little published research comparing HFCWO with airway clearance techniques (ACTs) frequently used in the UK and Europe. The aim of this study was to compare the short-term effects of HFCWO with usual ACTs in patients with cystic fibrosis hospitalised with an infective pulmonary exacerbation.\n\nABSTRACT.METHODS:\nA 4-day randomised crossover design was used. Patients received either HFCWO on days 1 and 3 and usual ACTs on days 2 and 4 or vice versa. Wet weight of sputum, spirometry and oxygen saturation were measured. Perceived efficacy, comfort, incidence of urinary leakage and preference were assessed. Data were analysed by mixed model analysis.\n\nABSTRACT.RESULTS:\n29 patients (72% male) of mean (SD) age 29.4 (8.4) years and mean (SD) forced expiratory volume in 1 s (FEV1) percentage predicted (FEV1%) 38 (16.7) completed the study. Significantly more sputum was expectorated during a single treatment session and over a 24 h period (mean difference 4.4 g and 6.9 g, respectively) with usual ACTs than with HFCWO (p<0.001). No statistically significant change in FEV1% or oxygen saturation was observed after either HFCWO or usual ACTs compared with baseline. 17 patients (55%) expressed a preference for their usual ACT.\n\nABSTRACT.CONCLUSIONS:\nDuring both a finite treatment period and over 24 h, less sputum was cleared using HFCWO than usual ACT. HFCWO does not appear to cause any adverse physiological effects and may influence adherence.\n\nBODY.INTRODUCTION:\nThe last four decades have seen improved survival in patients with cystic fibrosis (CF). The latest figures estimate the median life expectancy of individuals with CF born in 2000–3 to be approximately 40 years.1 Respiratory failure is the major cause of morbidity and mortality in those with CF.2 Accumulation of secretions in the CF airway precipitates chronic infection, causing a progressive deterioration in lung function and eventually respiratory failure and death.3 Airway clearance techniques (ACTs) are an essential component of the management of patients with CF and are considered standard care.4 There is a range of ACTs which augment the normal mucus clearance mechanisms of the lung to facilitate expectoration. These techniques generally aim to promote secretion clearance by altering airflow and mucus viscosity. Typically, high frequency chest wall oscillation (HFCWO) produces compression of the chest wall via an inflatable jacket linked to an air pulse generator. 
The generator delivers an intermittent flow of air into the jacket which rapidly compresses and releases the chest wall at a variety of frequencies. Consequently, an oscillation in airflow within the airways is achieved. HFCWO has been shown to enhance central and peripheral mucus clearance.5 A number of underlying mechanisms have been hypothesised including increased airflow-mucus interaction causing a reduction in viscolelasticity, production of an expiratory airflow bias which promotes a cephalad movement of mucus and the enhancement and stimulation of ciliary activity.6 7 Published short-term evidence has demonstrated increased sputum clearance and improved pulmonary function with HFCWO compared with no treatment.8 This and other studies have also compared HFCWO with alternative ACTs. Some have demonstrated increased sputum clearance with HFCWO compared with postural drainage and percussion (PD&P).9–11 Others found no significant difference in sputum clearance between HFCWO and PD&P,8 12 13 positive expiratory pressure (PEP),8 high frequency oral oscillation13 or intrapulmonary percussive ventilation.14 Furthermore, some of these studies found no differences in efficacy related to pulmonary function between HFCWO and PD&P,8 12 13 PEP,8 15 oscillating positive expiratory pressure (Flutter, manufactured by Axcan Scandipharm Inc, Birmingham, AL, USA)16 and high frequency oral oscillation.13 One study reported an improved outcome in forced expiratory volume in 1 s (FEV1) in the longer term using HFCWO compared with PD&P.17 Few trials have compared HFCWO with alternative ACTs which are commonly used in the UK and Europe by individuals with CF. Phillips et al18 compared HFCWO (using the Hayek Cuirass) with the active cycle of breathing techniques (ACBT) in hospitalised paediatric patients. Significantly more sputum was cleared with the ACBT. The authors concluded that HFCWO was not an effective airway clearance treatment for children with CF. It is difficult to compare this study directly with those above, as the Hayek Cuirass machine has a different operating mode to the inflatable vest system. A series of Cochrane systematic reviews have found that no one ACT is superior in terms of respiratory function and efficacy.19 Of a number of published studies on HFCWO, only two were deemed of sufficient quality to be included in these randomised control trial systematic reviews.8 12 HFCWO is widely used in the USA where it is considered standard care in CF.20 It has recently been introduced to the UK and Europe, where the mainstay of care for airway clearance in CF is the ACBT, autogenic drainage (AD) and other airway clearance regimens using small devices.4 In the USA these techniques and devices tend to be considered adjuncts, with PD&P and HFCWO remaining the most common ACTs.20 While there is a body of evidence which equates HFCWO and PD&P, there is a need for further trials to compare HFCWO with alternative ACTs to provide a more relevant evidence base for HFCWO in the UK and Europe. The aim of the present study was to compare the short-term effects of HFCWO with patients' usual ACTs in those with CF admitted to hospital with an acute exacerbation of pulmonary infection. The hypothesis was that HFCWO was superior to patients' usual ACTs.\n\nBODY.METHODS.STUDY PARTICIPANTS:\nAll patients admitted to hospital who met the entry criteria were invited to participate in the study. 
The inclusion criteria were a diagnosis of CF (established by genotype or sweat sodium >70 mmol/l or sweat chloride of >60 mmol/l), FEV1 ≥ 20% predicted, age ≥16 years and an infective pulmonary exacerbation as defined by Thornton et al.21 Exclusion criteria were current severe haemoptysis, rib fractures, pregnancy, inability to give consent and those whose usual ACT was HFCWO. Informed written consent was obtained for all patients and the study was approved by Brompton Harefield and National Heart and Lung Institute research ethics committee.\n\nBODY.METHODS.STUDY DESIGN:\nA randomised crossover design was used to compare HFCWO with patients' usual ACTs, which allowed within-patient variability to be controlled. Over four consecutive days, patients received either HFCWO therapy on days 1 and 3 and their normal ACT on days 2 and 4 or vice versa. Allocation to HFCWO or usual ACT on day 1 was determined using a computer-generated randomisation table.\n\nBODY.METHODS.PROTOCOLS:\nPatients performed their usual ACT or received HFCWO two times daily at the same time. Before starting the study, each patient's usual ACT was reviewed by an experienced senior respiratory physiotherapist. In addition, patients were familiarised with HFCWO (The Vest). This involved the patients using The Vest for a trial period the day before the start of the study, during which time they were given the opportunity to experience all three protocol frequencies at a variety of pressures. Each airway clearance treatment session lasted 30 min and was supervised by the same physiotherapist to ensure optimisation and standardisation of usual ACT and HFCWO performance. All nebulised and inhaled medications were taken before each treatment session in accordance with the patients' individual regimens.\n\nBODY.METHODS.HIGH FREQUENCY CHEST WALL OSCILLATION:\nThe following regimen was identified as current best practice following an in-depth review of the literature and discussion with clinical experts in the USA. Using The Vest Airway Clearance System Model 4 (Hill-Rom UK Ltd, Leicestershire, UK), each patient was fitted with an appropriately-sized, full torso, inflatable, disposable vest connected to the air pulse generator via two flexible tubes. Patients remained in an upright sitting position throughout the 30 min treatment session. HFCWO was applied for 8 min at each of three frequencies in sequence (10, 13 and 15 Hz) with each frequency followed by a 2 min rest period. The pulse pressure was set according to individual patient's reported comfort at all three frequency settings. During both the HFCWO and rest periods, patients were instructed to huff or cough as they felt necessary in order to expectorate loosened bronchial secretions.\n\nBODY.METHODS.USUAL ACT:\nUsual ACTs were in accordance with the guidelines of the International Physiotherapy Group for Cystic Fibrosis.22 Patients performed their usual ACT for 30 min, and for patients practising an assisted ACT, the physiotherapist provided percussion. Patients were allowed to perform combined ACTs where this was their usual practice. This reflected current international practice more accurately and recommendations that ACTs be adapted on an individual basis.23\n\nBODY.METHODS.OUTCOME MEASURES:\nThe primary outcome measure was wet weight of sputum expectorated during a treatment session. Patients were instructed to expectorate all sputum into a preweighed pot during and for 30 min following each treatment session. 
They were also instructed to collect sputum expectorated at all other times during each 24 h period. All sputum collected was weighed immediately following collection on weighing scales with an accuracy of 0.01 g (BL310; Sartorious UK Ltd, Epsom, UK). FEV1 was measured using a hand-held spirometer (2120; Vitalograph Ltd, Buckingham, UK) in accordance with internationally agreed standards.24 Measurements were taken immediately before and after a 30 min period following each treatment session. Data were analysed using Spirotrac IV Version 4.30 software (Vitalograph Ltd). Pulsed arterial oxygen saturation (SpO2) was measured transcutaneously at rest, for 5 min immediately before, 30 min during and 30 min immediately following each session. SpO2 was measured with a fingertip pulse oximeter (Konica-Minolta Pulsox-300i; Stowood Scientific Instruments, Oxford, UK). The data were analysed using Download 2001 Version 2.8.0 software (Stowood Scientific Instruments Ltd). The perceived efficacy and comfort of each ACT and the incidence of urinary leakage during treatment were measured using 10 cm visual analogue scales (VAS). Each day, after the last treatment session, patients completed three 10 cm VAS with reference to the ACT used that day. On the VAS used, 0 represented not at all effective/comfortable or no urinary leakage and 10 represented extremely effective/comfortable or a lot of urinary leakage. On the fourth and final day, participants were also asked to indicate which ACT they would prefer. An independent observer, blind to the daily method of airway clearance used, performed the spirometry, weighed the sputum samples and collected the 10 cm VAS throughout the study.\n\nBODY.METHODS.STATISTICAL ANALYSIS:\nA sample size calculation determined the number of patients required to test for superiority of HFCWO. This was based on a difference of 4 g of sputum between the usual ACT and HFCWO during a single treatment session. A square-root of within SEM of 4 g at the 5% significance level would require 24 patients to achieve 90% power. Data are presented as mean (SD), median (IQR) or n (%) as appropriate. Continuous variables were analysed using a mixed-effects linear regression model. This was to allow the results to be adjusted for a number of factors which are inherent to the design of a crossover trial. The order of treatment randomisation and the day and time of treatment were all entered into the model and their effect on the outcomes was tested. For this trial, the results were also adjusted for the method and position of treatment in the ACT session. In these models, patients were entered as random effects since it was not of interest to quantify the differences between individual patients, but it was important to account for the repeated measurements on each patient. The estimates of the fixed effect of ACT versus HFCWO are presented as mean (95% CI). A p value of 0.05 was taken to be statistically significant. All analyses were conducted using Stata 9.2 (StataCorp).\n\nBODY.RESULTS.PARTICIPANTS:\nFifty patients were invited to participate in the study, 20 declined, 29 patients completed the study and 1 patient was withdrawn due to a hypoglycaemic episode. Table 1 shows demographic and baseline characteristics of the patients who completed the study. 
Table 1 Patient demographics and baseline characteristics (n=29) Age (years) 29.4 (8.4) Male (%) 21 (72%) Height (cm) 171 (9) Weight (kg) 60 (11) BMI (kg/m 2 ) 20.4 (2.6) FEV 1 (l) 1.46 (0.72) FEV 1 % predicted 38 (16.7) SpO 2 (%) 94.3 (2.1) Data are presented as mean (SD) or n (%) as appropriate. BMI, body mass index; FEV 1 , forced expiratory volume in 1 s; SpO 2 , pulsed arterial oxygen saturation. Twenty-nine patients were treated with intravenous antibiotics as part of their medical management. All participants received two treatment sessions on each study day and all treatment sessions were 30 min in duration. The mean (SD) length of stay for patients was 14 (5) days and the mean day of entry to the study was day 8 (3) days.\n\nBODY.RESULTS.USUAL ACT:\nUsual ACTs included the ACBT with modified PD&P (41%; n=12) and with modified PD alone (7%; n=2), AD in sitting (28%; n=8) and with modified PD (7%; n=2), PEP (7%; n=2) and Flutter (10%; n=3).\n\nBODY.RESULTS.SPUTUM WEIGHT:\nThe wet weight of sputum expectorated with usual ACT compared with HFCWO is shown in table 2. The mean weight of sputum expectorated during a single treatment session and over a 24 h period was significantly greater with usual ACT than with HFCWO. The mean difference in wet weight of sputum expectorated during a treatment session was 4.4 g (p<0.001) and the mean difference in wet weight of sputum expectorated over a 24 h period was 6.9 g (p<0.001). These findings were not affected by order, time, day or position of treatment. Table 2 Wet weight of sputum expectorated: HFCWO compared with usual ACT Period of sputum collection Expectorated sputum wet weight (g) Usual ACT HFCWO Mean difference p Value Mean (SD) Median (IQR) Mean (SD) Median (IQR) Single treatment session 9.1 (7.9) 7.2 (3.0–14.2) 4.6 (4.1) 3.4 (1.5–6.7) 4.4 (3.5 to 5.4) <0.001 24 h (excluding treatment) * 22.4 (26.8) 12.9 (4.0–29.9) 24.9 (25.8) 15.3 (3.9–40.1) −1.5 (−4.6 to 1.6) 0.352 24 h (including treatment) 39.8 (36.3) 25.5 (14.0–57.1) 34.3 (30.7) 26.3 (12.1–46.0) 6.9 (3.1 to 10.8) <0.001 Data are presented as mean (SD) or (95% CI) or median (IQR) as appropriate. Data are adjusted for randomisation, day, time and position of treatment using a mixed effects linear regression model. * Of 116 24-h sputum samples collected, two were discarded as they were incomplete. ACT, airway clearance technique; HFCWO, high frequency chest wall oscillation. No statistically significant difference was observed in the amount of sputum expectorated when using HFCWO or usual ACT between treatments in a 24 h period.\n\nBODY.RESULTS.PHYSIOLOGICAL MEASURES:\nFEV1 and SpO2 measured before, during and after usual ACT and HFCWO treatment sessions are shown in table 3. Table 3 Forced expiratory volume in 1 s and pulsed arterial oxygen saturation at baseline, during and after treatment with usual airway clearance technique and HFCWO Usual ACT HFCWO Baseline During treatment 30 min after treatment Baseline During treatment 30 min after treatment FEV 1 % predicted 39.1 (16.9) NA 38.9 (17.1) 38.9 (16.8) NA 39.2 (16.7) SpO 2 (%) 94.4 (2.0) 94.4 (1.9) 93.9 (1.6) 94.5 (1.8) 95.0 (1.7) 94.3 (1.7) Data are presented as mean (SD). ACT, airway clearance technique; FEV 1 , forced expiratory volume in 1 s; HFCWO, high frequency chest wall oscillation; NA, not applicable; SpO 2 , pulsed arterial oxygen saturation. \n\nBODY.RESULTS.COMFORT, EFFICACY AND PREFERENCE:\nThe VAS scores for comfort, efficacy and urinary leakage during usual ACT compared with HFCWO are shown in table 4. 
No significant differences were observed in VAS scores for comfort or urinary leakage between HFCWO and usual ACT. Patients scored the efficacy of their usual ACT significantly higher than for HFCWO (mean difference 14 mm; p=0.002). This was not affected by the order or day of treatment. Of those patients who completed the study, 17 (55%) expressed a preference for their usual ACT over HFCWO. Preference was not predicted by the amount of sputum expectorated. Table 4 Comfort, efficacy and urinary leakage: HFCWO compared with usual ACT Self reported measure Visual analogue scale score (mm) Usual ACT HFCWO Mean difference p Value Comfort 69 (23) 70 (22) −1 (−9 to 7) 0.784 Efficacy 68 (21) 54 (26) 14 (6 to 23) 0.002 Urinary leakage 0 (1) 0 (1) −0.05 (−0.3 to 0.4) 0.791 Data are presented as mean (SD) or (95% CI) as appropriate. Data are adjusted for randomisation, day of treatment and time of treatment using a mixed effects linear regression model. ACT, airway clearance technique; HFCWO, high frequency chest wall oscillation. \n\nBODY.DISCUSSION:\nThere have been few published comparisons between HFCWO using a vest system with the ACTs of the ACBT and AD. This short-term study, carried out in individuals with CF admitted to hospital with an acute infective pulmonary exacerbation, showed that significantly more sputum was expectorated during a single treatment session and over a 24 h period using the patient's usual ACT than with HFCWO. In addition, slightly less sputum was expectorated at all other times (excluding treatment sessions) on usual ACT days compared with HFCWO, but this trend was not statistically significant. These findings were independent of order, time or day and position of treatment. Neither HFCWO nor any of the usual ACTs were associated with any adverse clinical events. A possible factor contributing to the difference in sputum clearance between HFCWO and usual ACT may have been the number and frequency of forced expiratory manoeuvres (FEMs) and the more gentle expiratory manoeuvres of the AD breath that were performed with the usual ACTs. Some studies have standardised the number of coughs and FEMs that patients performed; however, at the time of designing the protocol, the aim was to compare the regimens as currently practised internationally and the frequency of coughs and FEMs was neither standardised nor counted. In retrospect, it would have been of value to have counted the number of coughs and FEMs undertaken during each regimen, but it had not been anticipated that any differences between HFCWO and usual ACTs may be a consequence of the number of FEMs or AD breaths. Theoretically, during the three 8 min periods of HFCWO, fewer FEMs, coughs or AD breaths would be undertaken than during an equivalent period of the ACBT, AD, PEP or Flutter (all of which inherently include FEMs or AD breaths at regular intervals). This difference was supported by observations of the investigators. However, manual cough counts are subject to observer error. Objective cough monitoring using the Leicester Cough Monitor has only recently been validated and should be considered for use in further studies.25 Components of patient satisfaction include efficacy, comfort and convenience. Some studies have formally evaluated patient satisfaction and compliance. 
One study reported that 50% of subjects chose HFCWO compared with the Flutter, and efficacy was the most frequently cited reason for this choice.16 A later study reported that HFCWO was not preferred over PD&P and intrapulmonary percussive ventilation; furthermore, there was no significant correlation between treatment preference and sputum weight.14 The current study found that patients perceived the efficacy of HFCWO to be statistically significantly less than that of their usual ACT. However, nearly half (45%) of patients expressed a preference for HFCWO. Preference may have been affected by the novelty of a new treatment and it is unknown whether this would continue in the long term. There is no one recommended protocol for the application of HFCWO in the literature. Published studies describe differing numbers and duration of frequencies, length of treatment and airway clearance. Frequencies of 10, 13 and 15 Hz were chosen as it has been reported that maximum mucus transport occurs between 11 and 15 Hz with a peak at 13 Hz.5 6 In addition, the highest oscillated tidal volume flow (peak airflow) occurred between 10 and15 Hz in patients with CF.26 More recent research recommends an individual \"tuning\" method to identify optimum treatment frequencies. These have been shown to vary among individuals and the oscillation waveform,27 but it is unknown whether \"tuning\" increases efficacy. It is possible that a practice effect could have occurred as all patients were new to HFCWO. However, the protocol used in this study did not require the patient to perform any newly learnt physical technique. In addition, the statistical analysis ensured data were adjusted for day of treatment and found no effect. Alternatively, patients' familiarity with their usual ACT may also have had an effect on outcomes. This study was powered to detect a difference of 4 g of sputum expectorated during a single treatment session. Other studies have been based on a difference of 3–3.5 g, which is generally accepted as a clinically important difference.28 29 Wet weight sputum was felt to be an appropriate primary outcome measure in this short-term study in an acute environment. Previous work has found wet weight to be proportional to dry weight sputum.28 30 Emerging non-invasive means of measuring airway clearance may be more sensitive indicators in the future (eg, lung clearance index and electrical impedance tomography). Considering the cost benefit of HFCWO compared with other ACTs and the differing healthcare systems in the USA and the UK, it is unlikely that HFCWO will become the first choice ACT for most individuals in the UK. Further work needs to be undertaken to identify the place of HFCWO in Europe. Patient preference for a treatment regimen may positively influence adherence to treatment in the short term, and nearly half the patients who participated in this study preferred HFCWO to their usual ACT. HFCWO is a safe treatment that facilitates airway clearance in CF but, when compared with patients' usual ACTs, HFCWO led to the clearance of significantly less sputum during a single treatment session and over a 24 h period.\n\n**Question:** Compared to Usual airway clearance techniques what was the result of High frequency chest wall oscillation on Visual analogue scale scores for efficacy?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
} ] |
[ { "role": "assistant", "content": "**Answer:** significantly decreased\n" } ] |
significantly decreased |
450 |
[ { "role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Concomitant therapy with Cineole (Eucalyptole) reduces exacerbations in COPD: A placebo-controlled double-blind trial\n\n ABSTRACT.BACKGROUND:\nThe clinical effects of mucolytics in patients with chronic obstructive pulmonary disease (COPD) are discussed controversially. Cineole is the main constituent of eucalyptus oil and mainly used in inflammatory airway diseases as a mucolytic agent. We hypothesised that its known mucolytic, bronchodilating and anti-inflammatory effects as concomitant therapy would reduce the exacerbation rate and show benefits on pulmonary function tests as well as quality of life in patients with COPD.\n\nABSTRACT.METHODS:\nIn this double-blind, placebo-controlled multi-center-study we randomly assigned 242 patients with stable COPD to receive 200 mg of cineole or placebo 3 times daily as concomitant therapy for 6 months during winter-time. The frequency, duration and severity of exacerbations were combined as primary outcome measures for testing as multiple criteria. Secondary outcome measures included changes of lung function, respiratory symptoms and quality of life as well as the single parameters of the exacerbations.\n\nABSTRACT.RESULTS:\nBaseline demographics, lung function and standard medication of both groups were comparable. During the treatment period of 6 months the multiple criteria frequency, severity and duration of exacerbations were significantly lower in the group treated with cineole in comparison to placebo. Secondary outcome measures validated these findings. Improvement of lung function, dyspnea and quality of life as multiple criteria were statistically significant relative to placebo. Adverse events were comparable in both groups.\n\nABSTRACT.CONCLUSION:\nConcomitant therapy with cineole reduces exacerbations as well as dyspnea and improves lung function and health status. This study further suggests cineole as an active controller of airway inflammation in COPD by intervening in the pathophysiology of airway inflammation of the mucus membrane.\n\nABSTRACT.TRIAL REGISTRATION:\nISRCTN07600011\n\nBODY.INTRODUCTION:\nChronic obstructive pulmonary disease (COPD) is considered to be a multi-component disease comprising structural and functional changes inside and outside the lungs. Effective medications for COPD are available and can reduce or prevent symptoms, increase exercise capacity, reduce the number and severity of exacerbations and improve health status. In common clinical use are bronchodilators as β-agonists, anticholinergic drugs and methylxanthines as well as glucocorticosteroids. The clinical effectiveness of these drugs has been shown in many controlled clinical studies [1-7]. Airway inflammation and mucociliary dysfunction in COPD patients have direct clinical consequences on the decline of lung function. 
As a consequence of cigarette smoking the ciliated epithelium is damaged and the mucus membrane becomes inflamed, resulting in decreased mucociliary transport leading to an accumulation of mucus within the airway so that the likelihood of recurrent respiratory infection is increased. Cineole has positive effects on the beat frequency of the cilias in the mucus membrane and has bronchodilating and anti-inflammatory effects. Therefore, it is appropriate to postulate that cineole will show positive influence on the exacerbations as well as on the lung function in COPD patients – even as concomitant therapy [8-13]. We conducted a randomised, placebo-controlled multi-center trial with the concomitant prescription of cineole – the main constituent of eucalyptus oil – in patients with stable COPD. The primary hypothesis was that cineole would decrease the number, severity and duration of exacerbations. Secondary outcome measures were lung function, severity of dyspnea and quality of life as well as relevant adverse effects.\n\nBODY.MATERIALS AND METHODS.ENROLMENT OF PARTICIPANTS:\nParticipants were recruited in the offices of 4 general practitioners and 7 specialists in pneumology in Germany. The study was carried out during the winter seasons 2003/2004 and 2004/2005 over a treatment period of 6 months in the winter and starting the enrolment in September at the earliest. The participants were 40 to 80 years of age and had airflow limitation with FEV1 of less than 70% and more than 30% of the predicted value (moderate to severe COPD; according to GOLD classification stages 2 and 3 of COPD) [14]. Patients with an increase of more than 15% and more than 200 ml in FEV1 after inhalation of β-agonists (at least 200 μg Salbutamol or equivalent) were excluded according to the definition of COPD of the German Airway-League. [15]. All patients were current smokers or ex-smokers with at least 10 pack years. Patients were excluded if they had severe medical conditions such as bronchial carcinoma, myocardial infarction, alcoholism, heart failure. All randomised patients provided written informed consent and the protocol was approved by the local Ethics Committees at each of the 11 participating centres.\n\nBODY.MATERIALS AND METHODS.TREATMENT GROUPS:\n242 patients were randomly assigned to one of the two treatment groups with stratification according to the clinical centres. All patients were given the necessary dose of capsules each containing 100 mg cineole or no active ingredient. For each group 2 capsules 3 times daily were prescribed resulting in a dose of 600 mg per day for the cineole or no cineole for the placebo group as concomitant therapy. Patients were instructed to take the capsules half an hour before meal so that they could not recognize the smell of cineole. Capsules with active substance and placebo looked absolutely identical and were sealed in blister stripes. The diagnosis of COPD was confirmed according to the current guidelines of \"Global Initiative for Chronic Obstructive Lung Disease\" (GOLD). Frequency, duration, severity and symptoms of exacerbations were defined according to the literature [16-18]. An exacerbation was documented when the duration was more than 3 days or a complex of at least 2 respiratory adverse events with a duration of more than 3 days occurred. 
Exacerbation severity was defined as: mild (Score = 1, increased need for basic medication of COPD which the individual can manage in its own normal environment), moderate (score = 2, increased need for medication and he/she feels the need to seek additional medical assistance) and severe (score = 3, patient recognise obvious and/or rapid deterioration in conditions requiring hospitalisation). Details for the number, duration and severity as well as treatment and symptoms of exacerbations were recorded in the patient's diary for each day. Since the most relevant differentiation for exacerbations are frequency, duration and severity, the multiple criteria were combined as primary outcome measures for the statistical evaluation. Secondary outcome measures were the single parameters of the exacerbation as well as lung function, symptoms and quality of life. Spirometric measurements were carried out before the beginning of the study determining reversibility of the airflow limitation by inhalation of short acting β2-agonists to assure that the reversibility of lung function was less than 200 ml or 15%. Spirometric measurement included determination of forced expiratory volume in 1 second (FEV1), forced vital capacity (FVC) and vital capacity (VC) at the beginning and after 3 and 6 months. Additionally, symptoms score were determined for dyspnea (scores: 0 = caused no problems, 1 = caused occasionally problems, 2 = caused a lot of problems, 3 = the most important problem the patient had), weekly frequency of dyspnea (scores: 0 = no day was good, 1 = 1–2 days were good, 2 = 3–4 days were good, 3 = nearly every day was good, 4 = every day was good), general conditions (scores: 0 = good, 1 = impaired, 2 = bad, 3 = very bad, 4 = unbearable), cough (scores: 0 = never, 1 = rarely, 2 = occasionally, 3 = often, 4 = very often, 5 = nearly continuously). Diagnosis-related quality of life was determined according to the \"St. George's Respiratory Questionnaire\" [19].\n\nBODY.MATERIALS AND METHODS.VISITS AND RANDOMIZATION:\nBefore randomisation we ascertained the patients' eligibility and conducted spirometry. After the randomisation the following parameters were recorded: height, weight, age, time since first symptoms for the diagnosis of COPD, documentation of allergies, concomitant disease, smoking habits (documentation of pack years), number of exacerbations in the year before during winter time, determination of quality of life and current maintenance therapy. The following control visits were carried out after 1, 2, 3, 4, 5 and 6 months recording exacerbations since last visit, frequency of dyspnea, characterisation of dyspnea, hypersecretion and cough as well as adverse events, compliance and change of therapy. Spirometry was carried out at the beginning of the study as well as after 3 and 6 months of treatment. Quality of life was determined at the beginning and at the end of the study.\n\nBODY.MATERIALS AND METHODS.STATISTICAL ANALYSIS:\nThe proposed sample size for the present trial was 240 patients for both treatment groups. The sample size was chosen to detect a minimum difference of 15% of exacerbations after 6 months of treatment. Analysis of efficacy was performed with the intention-to-treat-population including all eligible patients who received at least one dose of medication and had at least one follow-up visit. 
The number of all exacerbations, duration of exacerbations, degree of severity of exacerbation recorded in patients diary during the 6 months treatment period were summarised and the sums compared according to Wei-Lachin's directional test of multiple criteria (equally weighted) [20]. Additionally, single parameters characterising the exacerbations and dyspnea were analyzed exploratory as secondary outcome measures at multiple endpoints according to Wei-Lachin validating the sum-formation. The Wilcoxon-Mann-Whitney-U Test was used for all other secondary outcome measures. Data are expressed as mean values (with SD) and all tests were two-tailed. P-values of 0.05 or less were considered to indicate statistical significance.\n\nBODY.RESULTS:\nA total of 242 patients were randomised and received at least one dose of study medication. 22 patients were excluded from the statistical analysis of efficacy because they did not meet the requirements of the GOLD guidelines since FEV1/VC was > 0.7. 220 patients were eligible according to the GOLD guidelines having COPD of stage II and III. The two treatment groups were well matched with respect to baseline characteristics (table 1). The mean age of the participants at entry was 62 years in both groups. The mean duration of COPD of 13 years as well as 31 pack-years and the basic medication (i.e. ICS, β-agonists, anticholinergics and theophylline) were balanced between the two groups (table 2). Medication was not changed during the treatment period except in occurrence of exacerbations. The baseline lung function and reversibility in both groups were comparable. Treatment compliance was determined by counting the study medication at each visit and was found high and comparable across the treatment groups. Table 1 Base Line Characteristics of the Patients* CHARACTERISTIC PLACEBO (N = 110) CINEOLE (N = 110) Age – yr Mean 62.5 ± 10.2 62.2 ± 9.1 Range 40 – 79 41 – 79 Sex – M/F 75/35 66/44 Weight – kg Mean 79 ± 16 79 ± 14 Range 48 – 120 52 – 113 Height – m Mean 1.71 ± 8.8 1.70 ± 8.0 Range 1.48 – 1.90 1.54 – 1.92 Years since appearance of COPD Mean 12.8 ± 9.7 13.6 ± 10.9 Range 1 – 56 1 – 59 Severity of COPD [number of patients] Moderate COPD (II) 69 68 Severe COPD (III) 41 42 FEV 1 /VC [%] 58.0 58.7 FEV 1 [l] 1.61 ± 0.5 1.62 ± 0.5 FEV 1 % predicted value 54.4 54.9 Reversibility (increase of FEV 1 after inhalation of β-agonist) [%] 5.5 4.5 Allergies (number) 10 9 Pack years 31.5 ± 19 31.3 ± 16 * Plus – minus values are means ± SD. Table 2 Constant concomitant therapy THERAPY PLACEBO (N = 110) CINEOLE (N = 110) Inhaled β-agonists* (LABA and SABA) 83 88 Inhaled anticholinergics 29 35 Inhaled corticosteroids (ICS) 27 25 Theophylline 38 32 * including combinations \n\nBODY.RESULTS.PRIMARY OUTCOME MEASURES.EXACERBATIONS:\nAt baseline the mean exacerbation rate was 3.2 in both groups during the previous year. The number of patients with exacerbations during the treatment period in the cineole group was 31 patients (28.2%) and 50 patients (45.5%) in the placebo group. As primary outcome measure the sum of exacerbations for frequency, duration and severity at all 6 following visits as composite endpoint (equally weighted) were calculated according to the Wei-Lachin Test procedure for multiple criteria and showed a statistically significant difference for the primary outcome measure between both treatment groups (p = 0.0120) Table 3. 
Calculating these single parameters alone exploratory according to Mann-Whitney-U it could be proven that they were statistically significant too (i.e. for frequency 0.0069 an, duration 0.0210 and for severity 0.0240). Validating these results by Wei-Lachin-Test procedure for multiple endpoints for the number, the degree and the severity of exacerbations, the degree during the 6-month treatment the differences between both treatment groups were statistically significant (p = 0.0016, 0.0031 and 0.0025) which underlines higher sensitivity of this test-procedure. Medication of the exacerbations with additionally applied corticosteroids occurred in 17 cases in the cineole and in 25 cases in the placebo group which was not statistically significant different. Table 3 Mean of sum of number, duration and severity of exacerbations during 6 months of treatment with cineole or placebo* PLACEBO CINEOLE SCORE SCORE P VALUE† Sum of exacerbations (number)# 0.9 ± 1.46 0.4 ± 0.82 0.0069 Sum of duration (days)# 5.7 ± 8.9 4.0 ± 10.9 0.0210 Sum of severity (score)# 1.4 ± 2.2 0.8 ± 1.5 0.0242 Summarized parameter (directional test) 0.0120 * Plus – minus values are means ± SD. † P-Values are for the comparison between the two groups. # The sum of the exacerbation parameters is calculated by addition of the documented parameters at all visits. \n\nBODY.RESULTS.SECONDARY OUTCOME MEASURES.LUNG FUNCTION:\nPatients only discontinued inhaled β2-agonist prior to spirometry testing. After inhalation of a β2-agonist the reversibility of lung obstruction (increase of FEV1) at the start of the study was 4.5% in the cineole group and 5.5% in the placebo group (table 1). After 6 months of treatment the mean FEV1 increased by 78 ml (4.7%) in the cineole group (table 4). The mean differences between both groups were not statistically significant (p = 0.0627). After 6 months of treatment an increase of FVC by 62 ml (2.7%) after cineole therapy and a decline of 25 ml (1.1%) after treatment with placebo was determined. The difference concerning change of FVC and VC between both treatment groups was not clinically relevant. Table 4 Secondary outcome measures during 6 months Of treatment with cineole or placebo for change of lung function and dyspnea symptoms * LUNG FUNCTION AND SYMPTOMS PLACEBO CINEOLE BASE LINE 3 MONTHS 6 MONTHS BASE LINE 3 MONTHS 6 MONTHS p-Value FEV 1 [l] 1.61 ± 0.5 1.62 ± 0.5 1.61 ± 0.5 1.62 ± 0.5 1.67 ± 0.5 1.70 ± 0.6 0.0627 † FVC [l] 2.23 ± 0.8 2.25 ± 0.8 2.22 ± 0.7 2.33 ± 0.8 2.36 ± 0.9 2.36 ± 0.9 0.2409 † VC [l] 2.81 ± 0.8 2.71 ± 0.7 2.68 ± 0.8 2.80 ± 0.8 2.73 ± 0.8 2.72 ± 0.9 0.2060 † Trouble in breathing # 1.8 ± 0.9 2.1 ± 0.9 2.2 ± 1.0 1.9 ± 0.9 2.4 ± 1.0 2.5 ± 1.1 0.0103 & Dyspnea in the morning $ 1.1 ± 0.7 0.9 ± 0.7 0.7 ± 0.7 1.1 ± 0.8 0.7 ± 0.7 0.5 ± 0.6 0.0466 & Dyspnea at rest $ 0.7 ± 0.7 0.4 ± 0.6 0.4 ± 0.6 0.6 ± 0.6 0.3 ± 0.5 0.3 ± 0.5 0.0156 & Dyspnea during exercise $ 2.0 ± 0.6 1.8 ± 0.7 1.7 ± 0.8 2.0 ± 0.6 1.7 ± 0.7 1.5 ± 0.9 0.1252 & * Plus – minus values are means ± SD. Higher scores on the symptoms-sum-score indicate more disease activity. † P-Values are for the comparison of the changes from base line to 6 months between the two groups. & P-Values for the comparison are calculated by the multiple criteria calculation of 6 visits by Wei-Lachin between the two groups. # Scores: 0 = no day was good, 1 = 1–3 days were good, 2 = nearly every day was good, 3 = every day was good in a week. 
$ Scores: 0 = did not cause any problems, 1 = sometimes caused problems, 2 = caused a lot of problems, 3 = the most important problem the patient had \n\nBODY.RESULTS.SECONDARY OUTCOME MEASURES.DYSPNEA:\nThe differences between both groups after 6 months of treatment are summarised in table 4. The baseline dyspnea scores in the morning, trouble in breathing, dyspnea at rest and dyspnea during exercise were similar in both groups, indicating a moderate level of a dyspnea for most patients at the beginning of the treatment period. Calculating the values at all 6 visits at multiple endpoints the difference between both treatment groups were statistically significant for trouble in breathing, dyspnea in the morning and dyspnea at rest. Dyspnea during exercise did not show a statistically significant difference between treatment groups.\n\nBODY.RESULTS.SECONDARY OUTCOME MEASURES.QUALITY OF LIFE:\nAt 6 months the mean improvement of SGRQ total symptom score was -9.1 after treatment with cineole and -4.1 after treatment with placebo (table 5). The difference between treatment groups was not statistically significant (p = 0.0630). The improvement of the symptom score was statistically significant (p = 0.0224). The differences of changes of activity score and impact score were not statistically significant between the two groups. Table 5 Secondary outcome measures for saint george's respiratory questionnaire (sgrq): Change of total symptom score, symptom score, activity score and impact score During 6 months of treatment with cineole or placebo* SGRQ SCORES PLACEBO CINEOLE P VALUE † BASE LINE 6 MONTHS BASE LINE 6 MONTHS Symptom score 57.4 ± 20.2 48.5 ± 24.9 57.3 ± 20.4 43.8 ± 24.3 0.0224 Activity score 53.4 ± 21.9 50.0 ± 24.8 52.1 ± 20.5 43.5 ± 22.4 0.2032 Impact score 37.2 ± 20.9 33.9 ± 23.3 35.8 ± 20.2 27.4 ± 19.2 0.1126 TOTAL SYMPTOM SCORE 45.6 ± 18.9 41.3 ± 22.5 44.4 ± 17.8 34.5 ± 18.9 0.0630 * Plus – minus values are means ± SD. Lower scores on the scores indicate higher quality of life. † P-Values are for the comparison of the changes in scores from base line to 6 months treatment between the two groups. \n\nBODY.RESULTS.MULTIPLE CRITERIA:\nSymptomatic of COPD patients is characterized by lung function, dyspnea and quality of life. In order to include these relevant secondary outcome measures together we used these parameters equally weighted as multiple criteria. The increase of FEV1, amelioration of dyspnea and improvement of total score of \"SGRQ\" were calculated according to the Wei-Lachin Test procedure for multiple criteria and showed a statistically significant difference for the relevant secondary outcome measure between both treatment groups (p = 0.0024).\n\nBODY.RESULTS.MULTIPLE CRITERIA.OTHER FINDINGS:\nConcomitant therapy with β-agonists and anticholinergics, corticosteroids or combinations and methylxanthines in both groups were comparable (table 2). The global assessment of efficacy showed a significant difference, which correlated with the amelioration of clinical findings after treatment with cineole.\n\nBODY.RESULTS.SIDE EFFECTS:\nAll patients receiving the study medication (including those with FEV1/VC > 0.7) were included in the safety examination. During treatment side effects were seen in 22 patients whereas in 17 cases adverse events were not related to the study medication. In the placebo group 11 adverse events were estimated not being related to the study medication whereas 2 cases were interpreted as being related to the study medication (heartburn). 
During treatment with cineole 9 cases of adverse events were reported whereas 6 adverse events were reported not being related to the study medication. In 3 patients (nausea, diarrhoea, heartburn) the adverse events were estimated being related to the study medication. The difference between the two treatment groups was neither clinically relevant nor statistically significant. Safety examinations of the global assessment showed no difference between the two treatment groups. During the 6 months of treatment compliance was good in all patients.\n\nBODY.DISCUSSION:\nPatients with COPD experience exertional breathlessness caused by bronchoconstriction, mucous secretion, edema of the airway wall and loss of attachments to the terminal airways [14]. Hence pharmacological therapy has focused on the treatment of airway obstruction and inflammation to improve symptoms primarily dyspnea as well as health status. Bronchodilators are the mainstay of pharmacotherapy for patients with COPD. On the other hand it is well known that mucociliary dysfunction has direct clinical implications. Mucus is beneficial in normal quantities but in case of mucus hypersecretion when cilia fail, the mucus pool allows bacterial colonisation. The presence of pooled bacteria results in the release of bacterially-derived toxins that destroy the underlying epithelium and trigger a neutrophilic response [21]. Taking into account the known pharmacological effects of the defined natural product cineole it was assumed that this compound might be beneficial for patients with COPD. Exacerbations have been shown to be reduced in various studies evaluating treatment with inhaled β-agonists or corticosteroids or combinations. The major finding of the present study is that cineole provides a significantly greater reduction of frequency, duration and severity of exacerbations than placebo. The exacerbations were analyzed as multiple criteria for the relevant specifications frequency, duration and severity. The result for the primary outcome measure could be validated exploratory for the single parameters showing a statistical significant difference too. Therefore, both testing procedures are valid whereas the testing multiple criteria is more sensitive. This proof of efficacy is an important contribution to the known pharmacological properties of cineole which therefore is not a mucolytic agent only. The result of this study suggests important new evidence of superior therapeutic efficacy of additional therapy with cineole to better control COPD exacerbations compared to the currently recommended combined therapy with ICS and LABA. Furthermore, additional therapy with cineole may positively interact with anti-inflammatory activity of recommended airway therapies in COPD and may serve to protect airways from other environmental agents. In general, quality of life deteriorates slowly in patients with COPD. During the period of 6 months treatment of this study we observed a decrease of the scores of SGRQ in both treatment groups. The reason for this finding in the placebo group, too, seems to be due to patients receiving better medical attention when involved in clinical trials. The higher rate of exacerbations in the winter before the study began in both treatment groups is due to the same reason. Our present data with cineole therapy underline a greater improvement than after treatment with placebo. Differences in change of FEV1 were not statistically significant (p = 0.0627). 
These findings correlate with a decline of FEV1 in the placebo group of 0.4% and an increase of 4.8% in the cineole group. This value is nearly identical with the increase of FEV1 after the inhalation of β-agonists, when testing the reversibility, before concomitant therapy with cineole started for six months.\n\nBODY.CONCLUSION:\nThese collective findings underline that cineole not only reduces exacerbation rate but also provides clinical benefits as manifested by improved airflow obstruction, reduced severity of dyspnea and improvement of health status. Therefore, cineole can provide a useful treatment option for symptomatic patients with COPD in addition to treatment according to the guidelines. These results have to be seen in context with socio-economic aspects. As COPD is an extremely costly disease and a cause of major financial and social burden concomitant therapy with cineole can be recommended, especially due to the lack of relevant side effects and relatively low cost. The results of our study provide good evidence that cineole will show benefits as additional therapeutic regimen in patients with COPD. These findings correspond to the interpretation of the efficacy-study with Carbocysteine but not with Acetylcysteine, because this medication did not show a significant reduction of exacerbations [22,23].\n\nBODY.COMPETING INTERESTS:\nThe authors declare that they have no competing interests.\n\nBODY.AUTHORS' CONTRIBUTIONS:\nThe study was designed and the protocol developed by HW, C S and UD. CS worked as principal investigator. Statistical analysis was carried out by UD. The results were interpreted by HW, CS and UD. All authors gave substantial critical input in revising the manuscript.\n\n**Question:** Compared to placebo what was the result of cineole on Side effects?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
} ] |
[ { "role": "assistant", "content": "**Answer:** no significant difference\n" } ] |
no significant difference |
232 |
[ { "role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: A prospective, contralateral comparison of photorefractive keratectomy (PRK) versus thin-flap LASIK: assessment of visual function\n\n ABSTRACT.PURPOSE:\nTo compare differences in visual acuity, contrast sensitivity, complications, and higher-order ocular aberrations (HOAs) in eyes with stable myopia undergoing either photo-refractive keratectomy (PRK) or thin-flap laser in situ keratomileusis (LASIK) (intended flap thickness of 90 μm) using the VISX Star S4 CustomVue excimer laser and the IntraLase FS60 femtosecond laser at 1, 3, and 6 months postoperatively.\n\nABSTRACT.METHODS:\nIn this prospective, masked, and randomized pilot study, refractive surgery was performed contralaterally on 52 eyes: 26 with PRK and 26 with thin-flap LASIK. Primary outcome measures were uncorrected distance visual acuity (UDVA), corrected distance visual acuity (CDVA), contrast sensitivity, and complications.\n\nABSTRACT.RESULTS:\nAt 6 months, mean values for UDVA (logMAR) were −0.043 ± 0.668 and −0.061 ± 0.099 in the PRK and thin-flap LASIK groups, respectively (n = 25, P = 0.466). UDVA of 20/20 or better was achieved in 96% of eyes undergoing PRK and 92% of eyes undergoing thin-flap LASIK, whereas 20/15 vision or better was achieved in 73% of eyes undergoing PRK and 72% of eyes undergoing thin-flap LASIK (P > 0.600). Significant differences were not found between treatment groups in contrast sensitivity (P ≥ 0.156) or CDVA (P = 0.800) at postoperative 6 months. Types of complications differed between groups, notably 35% of eyes in the thin-flap LASIK group experiencing complications, including microstriae and 2 flap tears.\n\nABSTRACT.CONCLUSION:\nUnder well-controlled surgical conditions, PRK and thin-flap LASIK refractive surgeries achieve similar results in visual acuity, contrast sensitivity, and induction of HOAs, with differences in experienced complications.\n\nBODY.INTRODUCTION:\nRefractive surgery is one of the most commonly performed elective procedures and will likely maintain its popularity as ablation techniques become more refined and understanding of corneal wound healing improves. Two of the most common methods of refractive surgery are photorefractive keratectomy (PRK) and laser in situ keratomileusis (LASIK). 
The rapid improvement in vision and lack of postoperative pain associated with LASIK has made this the preferred option with patients compared with PRK, which has greater postoperative discomfort and prolonged recovery of visual acuity.1 Recently, there has been renewed interest in PRK because of increasing concerns of complications associated with LASIK flap creation, including dry eye, corneal ectasia, and flap tears.2–5 Thin-flap LASIK attempts to gain benefits of both techniques by creating a flap of between 80 and 90 μm.6–8 Use of a thinner flap results in a more biomechanically stable cornea and decreases incidence of ectasia given the thicker residual stroma.3,9 Cutting a thinner LASIK flap is less invasive to the nerves within the corneal stroma, decreasing the severity and duration of dry eye, possibly by preserving corneal sensation and blinking rate.10–14 Flap creation avoids corneal epithelium removal, allowing reduced healing time and less haze and scarring.15 The present contralateral study compares the outcomes of eyes that have undergone PRK or thin-flap LASIK using the VISX STAR S4 excimer laser (VISX Incorporated, Santa Clara, CA), with flaps created with intended thicknesses of 90 μm using the IntraLase FS60 femtosecond laser (Abbott Medical Optics [AMO], Santa Ana, CA).\n\nBODY.METHODS:\nData from myopic eyes were analyzed, with or without astigmatism, in which the dominant eye was randomized (Research Randomizer software – Urbaniak, www.randomizer.org) to PRK or thin-flap LASIK (90 μm flap) and the nondominant eye underwent the alternative treatment. All PRK and thin-flap LASIK treatments were performed using the VISX Star S4 CustomVue laser at the John A. Moran Eye Center, Salt Lake City, Utah, between February 2008 and July 2009. All surgeries were overseen by two surgeons (M.M., M.D.M.). The research protocol was approved by the University of Utah Hospital Institutional Review Board. All patients included in this study met the US Food and Drug Administration guidelines for VISX CustomVue LASIK. Mean age of patient, 13 men and 13 women, was 30.8 years (range: 23–46). Twenty-six patients (52 eyes) with stable myopia (1.5–8.5 diopters [D]) and astigmatism (0.242–3.11 D) were enrolled in the study. Eleven patients excluded from this study had clinically significant lens opacities, previous corneal or intraocular surgery, keratoconus, unstable refraction, autoimmune disease, immunosuppressive therapy, or were pregnant or breastfeeding. Correction was made for distance and patients desiring monovision correction were excluded. Contact lenses were discontinued 2 weeks prior to screening for soft contact lens wearers and 6 weeks prior to screening for rigid gas permeable lens wearers. All patients had a preoperative examination including assessment of uncorrected distance visual acuity (UDVA), corrected distance visual acuity (CDVA), tonometry, slitlamp examination of the anterior segment, and dilated fundus examination. Manifest and cycloplegic refractions were repeated on 2 separate visits to ensure reliability and stability. Corneal topography and thickness were measured using the Orbscan II v.3.0 (Bausch and Lomb, Rochester, NY). All eyes received 5 preoperative wavefront analyses with the VISX CustomVue WaveScan aberrometer v.3.62 (Fourier) (AMO), without pharmacologic intervention, under mesopic conditions, with a minimum pupil diameter of 6.0 mm. 
The contralateral study design was made so that each eye could act as a control for the fellow eye in each patient, allowing for study groups to be well matched. There were no violations in the randomization; all patients were analyzed as originally assigned. The randomization protocol was generated before the trial and known only to the study coordinator. In all patients, the emmetropic correction target was based on manifest refraction and wavefront analysis. All flaps were created with the IntraLase FS60 femtosecond laser at 60 kHz in a raster pattern with bed energy of 1.15 μJ, side-cut energy of 2.00 μJ, and pocket enabled. The flaps were created with an intended thickness of 90 μm, diameter of 8.4 to 9.0 mm, superior hinge angle of 55°, and a side-cut angle of 70°. Intraoperative pachymetry or optical coherence tomography were not performed to evaluate actual flap thicknesses. If the 8.0 mm maximum intended ablation diameter exceeded the flap diameter, the hinge and flap were shielded during ablation. Postoperatively, each eye undergoing thin-flap LASIK received 1 drop of gatifloxacin 0.3% (Zymar; Allergan Inc, Irvine, CA), prednisolone acetate 1% (Pred Forte, Allergan Inc), ketorolac tromethamine 0.4% (Acular LS, Allergan Inc.), and a bandage soft contact lens (Softlens Plano T, Bausch and Lomb, Rochester, NY). The prednisolone acetate was continued hourly during the first preoperative day and 4 times daily for an additional 6 days. The gatifloxacin was continued 4 times daily for 1 week. In eyes undergoing PRK all eyes had their corneas cooled with 15 mL of BSS (2.8–3.9°C) immediately following ablation. This was followed by 1 drop of a gatifloxacin 0.3% (Zymar), prednisolone acetate 1% (Pred Forte), ketorolac tromethamine 0.4% (Acular LS) and a bandage soft contact lens (Softlens Plano T). Ketorolac tromethamine was continued 4 times a day for 3 days and then discontinued. Gatifloxacin and prednisolone acetate were continued 4 times a day for 1 week with a subsequent steroid taper over 2 to 3 months per surgeon preference. Mitomycin C was not administered to any patient in the study at any time. Both bandage soft contact lenses were removed simultaneously once re-epithelialization was complete, typically on postoperative days 3 to 5. Patients were seen 1 day, 1 week, 1 month ± 10 days, 3 months ±14 days, and 6 months ±14 days. At all follow-up examinations, UDVA and CDVA were tested using a standard Snellen eye chart. Visual acuity was recorded in both Snellen notation and logarithm of the minimum angle of resolution (logMAR) format. Contrast sensitivity was measured in controlled mesopic conditions at 3, 6, 12, and 18 cycles per degree (cpd) using the Vectorvision CSV-1000E chart (Vectorvision, Greenville, OH). Higher-order aberrations (HOAs), including coma Z(3,1), trefoil Z(3,3), and spherical aberration Z(4,0), were measured using the CustomVue WaveScan at a mean diameter of 6 mm. Undilated scans of both eyes were taken preoperatively and 1, 3, and 6 months postoperatively. Primary outcome measures were UDVA, CDVA, contrast sensitivity, and complications. HOAs were measured and trended within groups as secondary measures. After the study was completed, the results were compiled and the data unmasked for statistical analysis. Refractive error, visual acuity, and HOAs were treated as continuous variables and analyzed for significance by independent t-tests. In all tests, P values <0.05 were considered statistically significant. 
Data analysis was done using Microsoft Excel (Microsoft Corp, Redmond, WA).\n\nBODY.RESULTS:\nMean preoperative measurements of UDVA, CDVA, sphere, and cylinder are shown in Table 1. 25 of 26 patients (50 eyes) completed the study at postoperative 6 months. One eye in the thin-flap LASIK group required PRK retreatment following a flap tear and both eyes from this patient were therefore removed from analysis of visual acuity, contrast sensitivity, and HOAs as the retreatment prevented the ability to distinguish results between the 2 surgical methods. The eyes from this patient were still included in the analysis of complications.\n\nBODY.RESULTS.VISUAL ACUITY:\nTable 2 shows visual acuity outcomes at 1, 3, and 6 months postoperatively. Statistically significant differences were found between PRK and thin-flap LASIK in UDVA at 1 month postoperatively, with thin-flap LASIK eyes showing more improvement in UDVA. Visual acuities were not statistically different between the groups at 3 or 6 months.\n\nBODY.RESULTS.STABILITY, EFFICACY, AND PREDICTABILITY:\nTable 3 shows stability, efficacy, and predictability outcomes postoperatively at 1, 3, and 6 months. CDVA was statistically different between groups at 1 month, with 24% of the PRK group losing a line or more from preoperative values, while 9% of eyes in the thin-flap LASIK group lost only 1 line at 1 month. No eyes in the thin-flap LASIK group lost more than 1 line. Also, 39% of eyes in the thin-flap group gained a line by 1 month compared with only 12% of eyes in the PRK group. At 6 months 64% and 56% of eyes had gained a line or more of CDVA in the PRK and thin-flap LASIK groups, respectively (P = 0.462).\n\nBODY.RESULTS.CONTRAST SENSITIVITY:\nContrast sensitivity measurements at 3, 6, 12, and 18 cycles per degree (cpd) in each group are shown in Figure 1. There were no differences between groups at any cpd at any time in the study (P ≥ 0.156). The thin-flap LASIK group showed no change in contrast sensitivity postoperatively (P > 0.131), while patients in the PRK group had a slight decrease in contrast sensitivity at 1 month seen at 3 and 12 cpd (P = 0.004) and (P = 0.025), respectively. At 6 months contrast sensitivity in the PRK group was still significantly decreased from baseline at 3 cpd (P = 0.013), although it did not reach a statistically significant difference at 3 months (P = 0.101).\n\nBODY.RESULTS.COMPLICATIONS:\nTypes of complications differed between the 2 groups. In the PRK group, 2 cases of epithelial defects occurred by 1 week, but had completely resolved by 6 months. Three eyes in the PRK group had mild haze appearing as early as 1 week postoperatively. Haze remained in only 1 eye at 6 months, but was classified as minimal and had no effect on UDVA or CDVA. Nine eyes (35%) in the thin-flap LASIK group experienced complications. In the thin-flap LASIK group, flap debris (1 eye), diffuse lamellar keratitis (DLK, 1 eye), and an epithelial cyst at the edge of 1 flap were observed, with no loss of UDVA or CDVA, and all resolved by 6 months. Microstriae were observed in 6 eyes, one of which was the eye described above with flap debris and the other was the eye with DLK, with no associated loss of UDVA or CDVA, with epithelial proliferation noted as filling the microstria and making them less apparent. 
Two eyes in the thin-flap LASIK group experienced flap tears intraoperatively – one resulting in mild flap edge scarring by 6 months that had no significant effect on visual function, and the other case affecting vision at 1 month postoperatively which was retreated with PRK at 3 months. As a result of the retreatment with the counter surgical technique, the ability to accurately compare visual acuity, contrast sensitivity, and HOAs between the 2 surgical methods was limited and both eyes from this patient were removed from analysis of these measures, but were still included in the analysis of complications.\n\nBODY.RESULTS.HIGHER-ORDER ABERRATIONS:\nAt postoperative 1, 3, and 6 months, 24 (96%), 25 (100%), and 24 (96%) eyes, respectively, in each group completed CustomVue WaveScan analysis. Total root-mean square (RMS) HOAs, coma, trefoil, and spherical aberrations are compared in Figure 2. There were no significant differences between groups in any HOAs throughout the study (P ≥ 0.101), with all P values at 6 months ≥0.63. In both groups, total HOAs (P < 0.008), spherical (P < 0.002), and coma (P = 0.008 at 3 months; P = 0.024 at 6 months) aberrations were significantly increased compared with preoperative conditions. Trefoil showed no significant change throughout the study in either group (P = 0.298).\n\nBODY.DISCUSSION/CONCLUSION:\nThe present study confirms that PRK and thin-flap LASIK are effective surgeries for the correction of low to moderate myopia. Although thin-flap LASIK showed superior visual results in the early postoperative period there was no statistically significant difference in outcomes of UDVA, CDVA, contrast sensitivity, or total RMS HOAs between PRK and thin-flap LASIK by 6 months. In a similar study comparing PRK and thin-flap LASIK, Slade et al also found that UDVA results were better in the thin-flap group early on and equalized by 6 months.16 Our study showed a similar trend, with no significant differences in any of the primary outcomes at 6 months, and with no difference in UDVA at 3 months. Visual regression in our study was similar to outcomes in Slade's study in which 42% of the PRK group lost a line or more of CDVA and 22% of the thin-flap LASIK group lost 1 line at 1 month postoperatively. Despite the use of custom ablation, postoperative increases in total HOAs, sphere, and coma were noted in our study, as also seen by Slade et al, although they noted that the increase in sphere and coma aberrations was significantly higher in the PRK group at 1 and 3 months postoperatively. As found in previous studies, there was no significant change found in trefoil at any time postoperatively.17,18 Our study showed no difference in induction of HOAs between groups at any time. Although increases in HOAs after refractive surgery have been correlated with decreases in contrast sensitivity in other studies, we demonstrate that increases in total RMS, sphere, and coma were seen postoperatively in both groups without a reliable decrease in contrast sensitivity.19,20 Slade's group found that contrast sensitivity was better in the thin-flap group at all postoperative points in the study, which may have been related to their finding of lower induction of sphere and coma aberrations in the thin-flap group compared with the PRK group. The authors recognize that the Slade study had a larger population size (n = 50 per group) and would have increased power to detect significant differences. 
Our study would have had increased power of analysis with a similar study group size, but results from analysis of HOAs would not likely change as P values for all HOAs at 6 months were ≥0.63. It would be difficult to make any such correlation between contrast sensitivity and HOAs from the results of this study. A loss of CDVA has been associated with the development of corneal haze in other studies, but as mentioned above none of the patients with visual regression developed haze.21–23 Findings in other studies showing that the biomechanics of eyes that have received thin-flap LASIK treatment are indistinguishable from those of PRK have led to suggestions that thin-flap LASIK is the best approach to LASIK.16 Although the present study did not find any statistically significant differences between thin-flap LASIK and PRK in terms of visual quality at 6 months, complications dealing with flap integrity in the thin-flap LASIK group were present which are not complications found in PRK. Although PRK remains a viable option for those unable to undergo LASIK, the use of thinner flaps may eliminate some of the complications seen with traditional LASIK. Larger studies are needed to better compare the complication rates of both methods and to determine how effective thin-flap LASIK will be in achieving the benefits of PRK and LASIK while avoiding the risks associated with each method. While thinner LASIK flaps attempt to preserve the biomechanical stability of the corneal stroma, at the same time, the flap itself becomes less stable, as was noted with the 2 flap tears and other complications occurring in the thin-flap LASIK group in this study. A study by Espandar and Meyer24 showed that most complications in flaps created by IntraLase femtosecond laser occurred at the hinge, which is where the 2 flap tears that occurred in this study. A thinner flap hinge would be biomechanically less stable and would increase the likelihood of intraoperative flap tear occurrence as well. Six of the 9 eyes with complications in the thin-flap LASIK group had microstriae, which are caused by the flattening of a weak corneal flap unable to maintain its curvature over the small area of stroma removed during ablation. The biomechanics of the flap and hinge, however, cannot be evaluated by the design of this study as analysis was done based on intended flap thickness, which has been shown to vary with the IntraLase FS60 femtosecond laser.25 The study could have been strengthened had intraoperative pachymetry or OCT been performed. Creating a flap with increased integrity would help prevent microstriae from forming and would also provide for a stronger hinge that would be less susceptible to flap tear. Possible ways to optimize flap integrity include modification of hinge and side-cut angle creation, as well as improved planarity and microdisruption of flap edges. This will allow improved adhesion of the flap to the underlying stroma. Continued improvements in laser technology may allow for safer creation of thinner flaps, helping to provide evidence for superior outcomes in thin-flap LASIK, permitting the biomechanical stability of PRK with the visual recovery of LASIK. 
Custom flap formation that minimizes weak areas susceptible to tearing will be helpful in achieving this difficult balance between corneal and flap integrity.\n\n**Question:** Compared to Thin-flap laser in situ keratomileusis (LASIK) what was the result of Photo-refractive keratectomy (PRK) on Corrected distance visual acuity at 1 month?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
} ] |
[ { "role": "assistant", "content": "**Answer:** significantly decreased\n" } ] |
significantly decreased |
227 |
[ { "role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: A heart failure self-management program for patients of all literacy levels: A randomized, controlled trial [ISRCTN11535170]\n\n ABSTRACT.BACKGROUND:\nSelf-management programs for patients with heart failure can reduce hospitalizations and mortality. However, no programs have analyzed their usefulness for patients with low literacy. We compared the efficacy of a heart failure self-management program designed for patients with low literacy versus usual care.\n\nABSTRACT.METHODS:\nWe performed a 12-month randomized controlled trial. From November 2001 to April 2003, we enrolled participants aged 30–80, who had heart failure and took furosemide. Intervention patients received education on self-care emphasizing daily weight measurement, diuretic dose self-adjustment, and symptom recognition and response. Picture-based educational materials, a digital scale, and scheduled telephone follow-up were provided to reinforce adherence. Control patients received a generic heart failure brochure and usual care. Primary outcomes were combined hospitalization or death, and heart failure-related quality of life.\n\nABSTRACT.RESULTS:\n123 patients (64 control, 59 intervention) participated; 41% had inadequate literacy. Patients in the intervention group had a lower rate of hospitalization or death (crude incidence rate ratio (IRR) = 0.69; CI 0.4, 1.2; adjusted IRR = 0.53; CI 0.32, 0.89). This difference was larger for patients with low literacy (IRR = 0.39; CI 0.16, 0.91) than for higher literacy (IRR = 0.56; CI 0.3, 1.04), but the interaction was not statistically significant. At 12 months, more patients in the intervention group reported monitoring weights daily (79% vs. 29%, p < 0.0001). After adjusting for baseline demographic and treatment differences, we found no difference in heart failure-related quality of life at 12 months (difference = -2; CI -5, +9).\n\nABSTRACT.CONCLUSION:\nA primary care-based heart failure self-management program designed for patients with low literacy reduces the risk of hospitalizations or death.\n\nBODY.BACKGROUND:\nLimited literacy skills are common among adults in the United States [1]. Low literacy is associated with increased risk of hospitalization and worse control of chronic diseases [1-4]. Heart failure is a common chronic illness requiring multiple medications and significant self-care. Heart failure is the leading cause of hospitalization in the Medicare population [5]. The complexity of care for heart failure puts people with low literacy at considerable risk for adverse outcomes including hospitalization, worse quality of life, and mortality. Heart failure disease-management interventions appear effective in reducing rehospitalizations and improving quality of life [6]. 
Most randomized clinical trials of heart failure disease management completed over the last 10 years have enrolled patients during, or shortly after, hospitalization and reported the outcome of readmission [6]. Although the designs of these programs vary, several have tested education and support to enhance patient self-management as the main component of the intervention [7-10]. The content of self-management education usually includes teaching to understand medications, reduce salt intake, monitor daily weights, and recognize symptoms. Most programs include structured follow-up either by home visit, phone, or mail. Only a few, uncontrolled studies specifically ask patients to self-adjust their diuretics [11,12]. Heart failure self-management programs may be particularly effective for vulnerable populations, such as those with poor literacy [13,14]. However, to our knowledge, no previous studies have explicitly examined the role of self-management programs in a low literacy population. A recently published study and accompanying editorial suggested that such self-management support may be most effective among vulnerable populations [13,14]. Low literacy may represent a vulnerability for which we should design our programs. Disease management for patients with low literacy may require refined approaches to foster self-management skills. We developed a heart failure self-management program for use by patients with a variety of literacy levels [15]. We performed a randomized controlled trial comparing our self-management program to usual care among outpatients to test if the program could reduce hospitalizations and improve heart failure-related quality of life.\n\nBODY.METHODS.STUDY DESIGN:\nWe conducted a randomized controlled trial in the University of North Carolina (UNC) General Internal Medicine Practice, which serves a wide socioeconomic range of patients. The practice, staffed by over 20 attending faculty and 70 medical residents, cares for over 500 patients with heart failure.\n\nBODY.METHODS.STUDY PARTICIPANTS:\nTo be eligible, patients had to have a clinical diagnosis of heart failure confirmed by their primary provider through a direct interview, and one of the following: 1) chest x-ray findings consistent with heart failure, 2) ejection fraction <40% by any method, or 3) a history of peripheral edema. They also had to have New York Heart Association class II-IV symptoms within the last 3 months. Patients were excluded if they had moderate to severe dementia (based on the treating physician's clinical judgment), terminal illness with life expectancy less than 6 months, severe hearing impairment, blindness, current substance abuse, a serum creatinine >4 mg/dl or on dialysis, a requirement of supplemental oxygen at home, lacked a telephone, or were scheduled to undergo cardiac surgery or awaiting heart transplant. We did not exclude patients on the basis of literacy skill because we felt patients of all literacy levels would benefit from this intervention. Patients who read well often prefer information presented in an easy-to-read format [16]. We accepted referrals from the cardiology clinic if patients met eligibility criteria. This study was approved by the Institutional Review Board of the UNC School of Medicine, and all participants gave informed consent prior to enrollment. For participants who could not adequately read the informed consent document, the research assistant read and explained it to them. 
They were asked to sign a short form indicating that the informed consent document was reviewed and they agreed to participate. When the short form was used, a witness was asked to attest to the adequacy of the consent process.\n\nBODY.METHODS.PROCEDURES:\nParticipants were recruited between November 2001 and April 2003 from the General Internal Medicine and Cardiology Practices at UNC Hospitals. A trained research assistant screened all patients age 30–80 for use of furosemide. If the patient was on furosemide, their physician was queried about the presence of heart failure. If the patient met eligibility criteria and consented to participate, baseline data were collected. We then randomized patients by concealed allocation based on a random number generator to receive the intervention or usual care. All patients were followed for one year. All data were collected in the General Internal Medicine Practice.\n\nBODY.METHODS.INTERVENTION:\nThe intervention was delivered in the General Internal Medicine Practice. The educational materials and disease management intervention were previously described in detail, and the intervention is summarized here [15]. The intervention began with a 1-hour educational session with a clinical pharmacist or health educator during a regular clinic visit. Patients were given an educational booklet designed for low literacy patients (written below the 6th grade level and extensively pre-tested in focus groups and a pilot study [15]) and a digital scale. The educator and patient reviewed the booklet together, including management scenarios. As part of the educational session, patients were taught to identify signs of heart failure exacerbation, perform daily weight assessment, and adjust their diuretic dose. Because this intervention was aimed at patients with low literacy, the health educator used pedagogic strategies felt to improve comprehension for patients with low literacy [17]. For example, the educator had the patient teach back the information [18], engaged the patient in filling out the notebook, and used brainstorming to help the patient incorporate self-management into their lives. The educator, patient, and primary care physician collaborated to establish the patient's \"good weight\" (i.e., where the patient's heart failure was stable) and baseline diuretic dose. The educator then filled in the management plan in the patient's notebook to help the patient better manage weight fluctuations and self-adjust the diuretic dose based on weight (Figure 1). The general plan involved doubling the dosage if weight went up and halving it if weight went down. The program coordinator then made scheduled follow-up phone calls (days 3, 7, 14, 21, 28, 56) and monthly during months 3–6. The follow-up phone calls, each lasting 5–15 minutes, were designed to reinforce the educational session and provide motivation for the patients. Again, the program coordinator had the patient describe their self-management practices and offered feedback to improve them. Patients experiencing worsening symptoms were scheduled acute visits with their physician. We did not provide specialized nursing assessment, care or medication advice beyond diuretic dosing. If the patient's doctor determined that the good weight had changed, the program coordinator would revise the care plan with the patient. 
Patients enrolled in the control group received a general heart failure education pamphlet written at approximately the 7th grade level, and continued with usual care from their primary physician. The only contacts between the research team and the control patients were at enrollment and data collection.\n\nBODY.METHODS.MEASURES:\nWe assessed outcomes at 6 and 12 months through in-person interviews and review of the medical record. To be sensitive to low literacy, all interviews were conducted verbally by a trained research assistant. If patients were unable to come to clinic for the interview, it was conducted by phone. The research assistant was not blinded to the patient's study group. Primary outcomes were death or all-cause readmission and heart failure-related quality of life at the end of 12 months. Data on hospitalization dates were obtained from the patient and confirmed by medical chart review. All admissions, regardless of hospital or cause, were counted. For exploratory analyses, we classified reason for admission as cardiac or non-cardiac. Cardiac admissions included those primarily for heart failure (e.g., shortness of breath and edema relieved by diuresis) and other cardiac causes such as chest pain, arrhythmias, or syncope. Cause of admission was determined by chart review by one of the authors (D.D.) who was blinded to treatment allocation. Heart failure-related quality of life was assessed using a modified version of the Minnesota Living with Heart Failure Questionnaire (MLHF). The MLHF is a 21 question instrument with a 6-point Likert scale response option and scores ranging from 0 to 105 [19]. In pilot testing of the MLHF, we found that low literacy patients had trouble answering questions with the standard 6-point Likert scale [15], so we changed the response scale to 4 points, using a visual display with stars to represent increasing severity. The 4-point Likert scale was scored as 0, 1, 3, and 5 to approximate standard scores on the MLHF. Secondary measures included heart failure self-efficacy, knowledge, and behaviors. Self-efficacy was measured with an 8 item scale developed for the behaviors needed in this trial as suggested by self-efficacy scale developers [20]. Respondents used a 4-point Likert scale yielding a total score from 0–24. We assessed heart failure knowledge using a knowledge test previously developed for this population [15], Heart failure self-management behavior was assessed by asking patients how often they weighed themselves. We used patient self-report and the medical chart to measure age, gender, race, insurance status, income, years of education, medication use, years with heart failure, and the presence of co-morbidities. We measured literacy using the Short Test of Functional Health Literacy in Adults (S-TOFHLA) [21], a well-validated scale that correlates well with other measures of reading ability [22]. Patients who scored in the inadequate literacy range on the S-TOFHLA were considered to have \"low literacy.\" The cut-point for inadequate literacy is roughly analogous to the 4th grade reading level.\n\nBODY.METHODS.SAMPLE SIZE:\nSample size was based on pilot study results showing a 9-point improvement in MLHF scores over 3-months with the intervention [15]. Detecting a 9-point difference between intervention and control group with 80% power and alpha set at 0.05 required 70 patients per group. We aimed to enroll 150 patients to account for possible attrition, but stopped at 127 because funding ended. 
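The sample-size statement above can be roughly reproduced with a standard two-sample t-test power calculation. The sketch below assumes an MLHF standard deviation of about 19 points, which is not reported in the text and is chosen only because it yields the quoted figure of roughly 70 patients per group.

```python
# Rough reproduction of the sample-size calculation described above.
# Assumption: an MLHF standard deviation of about 19 points (not reported in
# the article); with a 9-point difference, 80% power and two-sided alpha = 0.05
# this gives roughly 70 patients per group.
from statsmodels.stats.power import TTestIndPower

difference = 9.0          # target difference in MLHF score
assumed_sd = 19.0         # assumption, not stated in the article
effect_size = difference / assumed_sd

n_per_group = TTestIndPower().solve_power(
    effect_size=effect_size, alpha=0.05, power=0.80, alternative="two-sided"
)
print(f"Approximate patients needed per group: {n_per_group:.1f}")  # roughly 70
```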
We did not power this study to detect differences in hospitalization, but studies with even smaller numbers of patients have shown a difference for that outcome [7].\n\nBODY.METHODS.STATISTICAL METHODS:\nPatients who did not return any phone calls and did not return for follow-up assessment did not have outcome data for analysis. Patients who withdrew from the study were censored at the time of withdrawal; any data collected prior to withdrawal were included in the analysis. Baseline differences between groups were assessed using t-tests for continuous outcomes and chi-squared tests for categorical outcomes. For MLHF, heart failure self-efficacy and heart failure knowledge, we used two-sample t-tests. Non-parametric tests were also performed for all comparisons, but results did not differ from the parametric tests. Because of the small sample size and the unequal distribution of baseline characteristics, we adjusted for baseline differences using linear regression. Analyses of self-reported outcomes, such as MLHF, excluded patients who died or withdrew from the study before 6 or 12 month data was collected. For hospitalization or death, we used negative binomial regression and censored patients at the time of death or withdrawal from the study. Based on the likelihood ratio test, negative binomial regression was a better fit for the data than a Poisson regression. Additionally, the Vuong test confirmed that a zero-inflated model was inappropriate [23]. Because of uneven distribution of baseline characteristics, we performed analysis of covariance with negative binomial regression to control for baseline differences [24]. We identified the following variables that could contribute to hospitalization or death based on previous studies: age, race, gender, literacy level, hypertension, diabetes, creatinine, MLHF score, use of β-blockers, angiotensin converting enzyme (ACE) inhibitors or angiotensin receptor blockers (ARBs), use of digoxin, and presence of systolic dysfunction [7,25]. Variables were not included in the model if the relationship between the variable and the study group or outcome had a p value greater than 0.3. We started with a model including the following items to arrive at the best point estimate: age, gender, hypertension, creatinine, MLHF, use of β-blockers, and use of ACE inhibitors or ARBs. We then eliminated variables from the model if p > 0.30, and if the point estimate remained within 10% of the initial estimate. We prespecified a subgroup analysis in patients with low literacy to analyze if the intervention had a similar effect. The same analysis described above was repeated for the subgroups of patients with low literacy and those with higher literacy. The initial multivariate model for the subgroups analysis included: age, gender, hypertension, MLHF, use of β-blockers, and use of ACE inhibitors or ARBs.\n\nBODY.METHODS.ROLE OF THE FUNDING SOURCE:\nThe funding sources had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; or preparation, review, or approval of the manuscript.\n\nBODY.RESULTS.PATIENTS:\nWe screened 919 patients on furosemide between November 2001 and April 2003. 127 met eligibility criteria and agreed to participate (Figure 2). Of those not enrolled, 407 did not have heart failure according to their physician, 367 did not meet eligibility criteria and 27 declined to participate. 
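As a minimal illustration of the count-model analysis described in the statistical methods above, the sketch below fits a negative binomial regression on simulated toy data (not the trial data), with follow-up time entering as an offset to mirror censoring at death or withdrawal, and converts the treatment coefficient into an incidence rate ratio. The dispersion parameter is left at its library default for simplicity, so this is a sketch of the approach rather than a re-analysis.

```python
# Minimal sketch of a negative binomial count model with an offset for
# follow-up time, run on simulated toy data (NOT the trial data).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 127
treat = rng.integers(0, 2, size=n)                  # 0 = control, 1 = intervention
followup_years = rng.uniform(0.5, 1.0, size=n)      # censored follow-up time
true_rate = np.where(treat == 1, 0.9, 1.6)          # hypothetical admission rates
admissions = rng.poisson(true_rate * followup_years)

X = sm.add_constant(treat.astype(float))
model = sm.GLM(admissions, X,
               family=sm.families.NegativeBinomial(),   # dispersion fixed at default
               offset=np.log(followup_years))
result = model.fit()

irr = np.exp(result.params[1])                      # exponentiated treatment coefficient
ci_low, ci_high = np.exp(result.conf_int()[1])
print(f"IRR (intervention vs control): {irr:.2f} (95% CI {ci_low:.2f}, {ci_high:.2f})")
```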
Of those randomized to the control group, 1 never returned after the first visit, 1 withdrew during the study and 4 died during the study. Follow-up was completed for all of the remaining participants (98%) (Figure 3). Of those randomized to the intervention group, 3 never returned after the first visit, 4 withdrew during the study and 3 died during the study. Follow-up was completed for all of the remaining participants (93%). At baseline, most characteristics were similar between the two groups (Table 1). However, the control group had more participants with hypertension, fewer with diabetes, and fewer men. Of heart failure related characteristics, the control group had more participants with systolic dysfunction, and taking β-blockers. The intervention group had more participants taking ACE inhibitors or ARBs, and digoxin. Regardless of these differences, none were statistically significant. The control group did have statistically significantly higher baseline MLHF scores representing worse symptoms at baseline.\n\nBODY.RESULTS.HOSPITALIZATION OR DEATH:\nThere were 68 hospitalizations (65) or deaths (3) in the intervention group and 111 (107 hospitalizations, 4 deaths) in the control group. The crude all-cause hospital admission or death incidence rate ratio (IRR) was 0.69 (95% CI 0.40, 1.19). After adjusting for age, gender, use of ACE inhibitor or ARB, use of a β-blocker, presence of hypertension, and baseline MLHF, intervention patients were less likely to have the outcome (IRR = 0.53; 95% CI 0.32, 0.89). 61% of patients in the control group had at least one hospitalization or died, and 42% of patients in the intervention group had at least 1 hospitalization or died (p = 0.13).\n\nBODY.RESULTS.CARDIAC HOSPITALIZATION:\n39% of patients in the control group and 34% of patients in the intervention group had at least one hospitalization for cardiac causes (p = 0.55). The unadjusted IRR was 0.79 (95% CI 0.42, 1.5). After adjusting for baseline differences, the IRR was 0.85 (95% CI 0.44, 1.7).\n\nBODY.RESULTS.HEART FAILURE-RELATED QUALITY OF LIFE:\nIn unadjusted analysis, the control group, on average, improved 5 points on the MLHF and the intervention group improved by 1 point. The difference was not statistically significant (3.5 points, 95% CI 11, -4, p = 0.36). After adjusting for baseline differences between the groups, the difference was 2 points (95% CI 9, -5, p = 0.59) suggesting no effect on heart failure-related quality of life.\n\nBODY.RESULTS.OTHER OUTCOMES.KNOWLEDGE:\nHeart failure related knowledge improved more in the intervention group than in the control group. Mean difference in score improvement was 12 percentage points (95% CI 6, 18; p < 0.001).\n\nBODY.RESULTS.OTHER OUTCOMES.SELF-EFFICACY:\nHeart failure self-efficacy improved more in the intervention group than in the control group. Mean difference in score improvement was 2 points (95% CI 0.7, 3.1; p = 0.0026).\n\nBODY.RESULTS.OTHER OUTCOMES.SELF-CARE BEHAVIOR:\nSignificantly more patients in the intervention group than in the control group reported daily weight measurement at 12 months (79% vs. 29%, p < 0.001).\n\nBODY.RESULTS.SUBGROUP ANALYSES ACCORDING TO LITERACY:\nTwenty-four patients in each group had inadequate literacy based on the S-TOFHLA (Table 2). Among these patients, there was no difference in quality of life score in unadjusted and adjusted analyses (difference = -1.6; 95% CI -15, 12); p = 0.81). For the combined outcome of hospitalization or death, the unadjusted IRR was 0.77 (95% CI 0.30, 1.94). 
After adjusting for baseline differences, the IRR was 0.39 (95% CI 0.16, 0.91). Seventy-five patients had marginal or adequate literacy based on the S-TOFHLA. We found no difference in quality of life score in unadjusted and adjusted analyses (difference = -4.2; 95% CI -14, 6; p = 0.40). Among patients in the higher literacy group, the unadjusted IRR for hospitalization or death was 0.65 (95% CI 0.33, 1.3). After adjusting for baseline differences, the IRR was 0.56 (95% CI 0.30, 1.04). We did not find a statistically significant effect modification between literacy and the intervention.\n\nBODY.DISCUSSION:\nA heart failure self-management program designed for patients with low literacy reduced the rate of the combined endpoint of hospitalization or death. The prespecified subgroup analyses suggest that patients with low literacy benefited as much from the intervention as the patients with higher literacy. The success of our intervention reflects the goals of our program. We designed an easy-to-read and use educational booklet and self-management plan, and focused on overcoming barriers to learning self-management [15]. Our intervention was founded on teaching self-management. We focused on helping patients understand signs and symptoms of worsening heart failure and perform self-adjustment of diuretics based on weight fluctuation. Many care providers would not attempt to teach patients, particularly those with low literacy, how to self-adjust their diuretic medication. We found that, with careful teaching, many patients incorporated this strategy into their daily routine successfully, as demonstrated by improved self-care behaviors. Teaching self-adjustment of diuretics, rather than the conventional teaching to call the care provider if weight fluctuates, empowers patients to take more control over their illness. Self-adjustment of diuretic dose is a prominent aspect of the self-management training we provided to the intervention patients. Other programs to improve patient self-management have not been explicit in teaching patients to self-adjust their diuretic dose based on weight fluctuation. Although our outcomes are comparable to others', using this approach puts more control into the hands of the patient. Furthermore, our intervention appears effective among patients with low literacy skills, a group often overlooked for empowering interventions. Our study adds to the growing literature on disease management programs for patients with heart failure [6], particularly those that focus on self-management training [7-10]. Studies focusing on self-management training have demonstrated comparable improvements in hospitalization rates to more comprehensive programs that aim to improve the quality of pharmaceutical prescribing, provide home visits, and take place in specialized cardiology clinics [6]. Such comprehensive programs have also been shown to reduce mortality, but self-management programs have not [6]. We did not detect any difference in heart failure related quality of life which was the outcome we powered our study to detect. Other self-management studies that have found improved quality of life have enrolled patients during a heart failure hospitalization [8,9]; however, we enrolled patients in the outpatient setting while they were clinically stable. Improving quality of life for stable outpatients may be more difficult for this type of intervention. 
We have previously reported that patients with diabetes and low literacy benefited more from a disease management intervention than those with higher literacy skills [26]. A similar result in two different chronic diseases substantiates the claim that appropriately designed disease management programs may have greater effect for low literacy or vulnerable populations, who are most at risk for adverse outcomes with usual care. This finding is particularly important in light of the recent study by DeBusk and colleagues that did not find a reduction in hospitalization with a well-designed comprehensive intervention [13]. The authors and an accompanying editorial [14] suggested that the failure to detect improvement may have occurred because the patients studied were less at-risk than in other studies. They called for more research to determine better ways of targeting disease management. We believe that low literacy is an important marker for vulnerability to adverse outcomes, and that disease management programs targeted to patients with low literacy may be an effective way of focusing resources on those most able to benefit. If patients with low literacy are to be preferentially recruited for such programs, innovative outreach and screening efforts will likely be required, as patients with low literacy may face particular barriers to accessing such care. This study should be interpreted in light of its limitations. Research assistants were not blind to group assignment during the assessment of self-reported outcomes. As such, patients in the intervention may have been more likely to inflate their responses in an effort to please the interviewer. This effect would tend to inflate patient responses to the subjective assessments of heart failure-related quality of life, self-efficacy, and self-care behaviors. The MLHF questionnaire was modified from its original form to make it easier for patients with low literacy to respond. This change in the scale may have changed its ability to detect important changes in heart failure related quality of life. Because the groups' mean scores were almost identical, we do not feel this limitation changed our results. In a similar vein, most questionnaires are not validated in low literacy populations, raising questions as to their ability to perform to the same standards. Our sample size was small, which did not allow for an even distribution of baseline variables among the groups. We controlled for baseline differences between groups in our analysis. While it is controversial whether or not to control for baseline differences in randomized controlled trials, some analysts have argued that doing so improves the power without introducing bias [24]. A larger, multi-site study would offer better control of confounders, better generalizability, and more power to determine differences in effect according to literacy. We did not collect data on the resources needed to implement this type of intervention in usual settings, and such a study and cost-effectiveness analysis would be helpful for most interventions of this type. We used health educators, not nurses or physicians, to deliver the intervention. By using less highly trained individuals to deliver the intervention, we enabled nurses and physicians to perform clinical tasks more commensurate with their training. 
Other studies that have performed global cost estimates have found that the savings from reductions in hospitalizations exceed the cost of the intervention [6].\n\nBODY.CONCLUSION:\nIn conclusion, our heart failure self-management program, designed for patients of all literacy levels, appears to reduce rates of hospitalization and death. Patients with low literacy, and other vulnerable patients, may stand to benefit most from these programs. Further research into the design, implementation, and dissemination of disease management programs for low literacy patients will be crucial for meeting the health care needs of the growing population of patients with chronic illness.\n\nBODY.COMPETING INTERESTS:\nDrs. DeWalt and Pignone have received honoraria and grants from Pfizer, Inc., Dr. Rothman has received grants from Pfizer, Inc., and Dr. Sueta is in their speakers bureau.\n\nBODY.AUTHORS' CONTRIBUTIONS:\nDD conceived of the study, participated in its design and coordination, performed statistical analyses, interpretation of the data, and drafted the manuscript. RM, MB conceived of the study and participated in its coordination. MK, KC coordinated the study, and collected the data, RR, CS participated in study design and interpretation of the data. MP conceived of the study, participated in its design and coordination, and interpretation of the data. All authors reviewed the manuscript for important intellectual content and gave final approval.\n\nBODY.PRE-PUBLICATION HISTORY:\nThe pre-publication history for this paper can be accessed here:\n\n**Question:** Compared to Standard information about self-care what was the result of Follow-up and thorough education on self-care on Self-efficacy?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly increased\n"
}
] |
significantly increased
|
477
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Improved outcomes from the administration of progesterone for patients with acute severe traumatic brain injury: a randomized controlled trial\n\n ABSTRACT.BACKGROUND:\nSevere traumatic brain injury (TBI) has been increasing with greater incidence of injuries from traffic or sporting accidents. Although there are a number of animal models of TBI using progesterone for head injury, the effects of progesterone on neurologic outcome of acute TBI patients remain unclear. The aim of the present clinical study was to assess the longer-term efficacy of progesterone on the improvement in neurologic outcome of patients with acute severe TBI.\n\nABSTRACT.METHODS:\nA total of 159 patients who arrived within 8 hours of injury with a Glasgow Coma Score ≤ 8 were enrolled in the study. A prospective, randomized, placebo-controlled trial of progesterone was conducted in the Neurotrauma Center of our teaching hospital. The patients were randomized to receive either progesterone or placebo. The primary endpoint was the Glasgow Outcome Scale score 3 months after brain injury. Secondary efficacy endpoints included the modified Functional Independence Measure score and mortality. In a follow-up protocol at 6 months, the Glasgow Outcome Scale and the modified Functional Independence Measure scores were again determined.\n\nABSTRACT.RESULTS:\nOf the 159 patients randomized, 82 received progesterone and 77 received placebo. The demographic characteristics, the mechanism of injury, and the time of treatment were compared for the two groups. After 3 months and 6 months of treatment, the dichotomized Glasgow Outcome Scale score analysis exhibited more favorable outcomes among the patients who were given progesterone compared with the control individuals (P = 0.034 and P = 0.048, respectively). The modified Functional Independence Measure scores in the progesterone group were higher than those in the placebo group at both 3-month and 6-month follow-up (P < 0.05 and P < 0.01). The mortality rate of the progesterone group was significantly lower than that of the placebo group at 6-month follow-up (P < 0.05). The mean intracranial pressure values 72 hours and 7 days after injury were lower in the progesterone group than in the placebo group, but there was no statistical significance between the two groups (P > 0.05). Instances of complications and adverse events associated with the administration of progesterone were not found.\n\nABSTRACT.CONCLUSION:\nOur data suggest that acute severe TBI patients with administration of progesterone hold improved neurologic outcomes for up to 6 months. 
These results provide information important for further large and multicenter clinical trials on progesterone as a promising neuroprotective drug.\n\nABSTRACT.TRIAL REGISTRATION:\nACTRN12607000545460.\n\nBODY.INTRODUCTION:\nTraumatic brain injury (TBI) remains one of the leading causes of injury-related death and severe disability. The management of TBI currently includes preventing further neurological insults, managing the intracranial pressure (ICP), and surgical procedures. It is very important to search for clinically effective neuroprotective drugs to prevent secondary brain injury after TBI. In spite of many neuroprotective agents showing efficacy in experimental models of TBI, none has produced significant neuronal protection when tested in clinical trials [1,2]. Progesterone, a hormone, has steroidal, neuroactive and neurosteroidal action in the center neuronal system. Neuroprotective effects of progesterone have recently been shown in a variety of animal models, including ischemic and traumatic brain insult models [3-6]. Postinjury administration of progesterone in experimental models of head injury confers significant protection against TBI-induced cerebral edema and secondary neuronal death, promoting behavioral recovery [7,8]. Experimental evidence suggests that postinjury treatment with progesterone decreases brain edema, attenuates free radical damage, and reduces neuronal loss in TBI animal models [8-13]. Progesterone also reduces the inflammatory response and attenuates neurological abnormalities after ischemia and spinal cord injury [14-18]. In a recently published controlled study of progesterone, Wright and colleagues conducted a phase II, randomized, double-blind, placebo-controlled trial to assess the safety and benefit of administering progesterone to patients with acute TBI [19]. No serious adverse events were found in the 77 patients who received progesterone, and the patients with moderate TBI who received progesterone were more likely to have a moderate to good outcome than those were randomized to placebo at 30 days post injury. The 30-day mortality in the progesterone group was less than one-half that of the control group. This outcome suggests that progesterone causes no harms and may be a beneficial treatment for TBI [19]. Despite these potential advantages and the good safety profile of progesterone described in studies utilizing animals or humans as subjects, there is relatively little information available from assessing neuroprotective properties of progesterone in the patients with acute severe brain trauma. The effects of progesterone on neurological outcome of the TBI patients remain unclear. The purpose of the present pilot clinical study was to assess the longer-term efficacy of progesterone on improving the neurological outcome of patients with acute severe TBI.\n\nBODY.MATERIALS AND METHODS.PATIENTS:\nPatients with acute severe TBI and a Glasgow Coma Scale (GCS) score ≤ 8 after resuscitation and stabilization were entered into the study. Two hundred and thirty patients from the Neurotrauma Center of our teaching hospital were included. Male or female patients between the ages of 18 and 65 years were studied. The patients received progesterone within 8 hours after the documented time of injury. All patients admitted to the Neurotrauma Center, Clinical Medical College of Hangzhou between March 2004 and February 2007 were consecutively eligible for enrollment. 
We excluded patients who had received any investigational drugs 30 days prior to the enrollment, such as progesterone, estrogen and investigational compound, patients with severe anoxic intracerebral damage or brain death, and patients whose clinical condition was unstable (partial pressure of oxygen < 60 mmHg or a systolic blood pressure < 90 mmHg, or both). We also excluded pregnant patients and lactating female patients, and those for whom there was doubt whether the neurological status resulted from head trauma or acute or chronic spinal cord injury. The study was conducted in compliance with the clinical protocol approved by the Institutional Review Board and the ethical committees of Clinical Medical College of Hangzhou, according to Good Clinical Practice standards. Because of the nature of patients' injuries, subjects in this clinical study were incapable of granting informed consent. Investigators therefore obtained informed consent from the subject's legal guardian or health proxy before administering the drug. Given the urgent circumstances, we were unable to obtain permission from a legal guardian or health proxy within the stipulated time window for some patients (n = 53). Investigators therefore sought approval from the Institutional Review Board to use deferred consent. If the Institutional Review Board determined that these regulatory criteria were satisfied, the investigators were permitted to enroll subjects without consent. When the drug was administered without proxy consent, the Institutional Review Board was notified within 2 working days. We continued to try to contact the proxy consent after drug administration, and documented those attempts to the Institutional Review Board. Once contacted, the family or legally authorized representative was notified of the patient's enrollment and asked to provide written approval for the patient's continued participation. If attempts to contact proxy consent were unsuccessful, or if the patient died before the family could be contacted, we notified the Institutional Review Board and placed a full report in the patient's record and study file.\n\nBODY.MATERIALS AND METHODS.STANDARD CLINICAL MANAGEMENT:\nAfter head computerized tomography scanning, the patients were delivered to the neurosurgical intensive care unit of the teaching hospital immediately or following surgical evacuation of an intracranial hematoma. All patients received the standard treatment for management of severe TBI based on the guidelines for the management of severe head injury of the American Association of Neurologic Surgeons [20]. Particular emphasis was placed on the prevention and treatment of secondary insults, the avoidance of intracranial hypertension, maintenance of a normovolemic state as well as normothermia and normoglycemia, with ventilation to maintain the oxygen pressure at a minimum of 100 mmHg and the carbon dioxide pressure at approximately 35 mmHg.\n\nBODY.MATERIALS AND METHODS.RANDOMIZATION AND MEDICATION ADMINISTRATION:\nThe prospective, randomized, placebo-controlled, double-blind study was conducted in our neurosurgical intensive care unit. Subjects enrolled in the study were randomized to receive either progesterone (Tianjing Jinyao Pharmaceutical Co. Ltd, Tianjing, China) or matching placebo within 8 hours of the documented time of injury. Qualifying patients were randomly assigned in a 1:1 manner to receive the matching treatment with random numbers. 
Patients for the treatment group were given progesterone at 1.0 mg/kg via intramuscular injection and then once per 12 hours for 5 consecutive days. A single-dosage volume equivalent to 0.05 ml/kg was used in each subject. In a double-blind manner, progesterone and placebo were supplied via identical-looking solutions in identical glass vials with or without progesterone. The appearance, packaging and administration of placebo and progesterone injections were the same for the two groups. All patients, treating physicians, nursing staff, and pharmacists were blinded throughout the study period.\n\nBODY.MATERIALS AND METHODS.CLINICAL MEASUREMENTS:\nThe ICP was monitored continuously using ICP monitoring apparatus (CAMINO. MPM-1; Integra Co., San Diego, CA, USA). A computerized tomography scan was obtained in all patients at admission and this was categorized according to the modified Marshall computerized tomography scan classification: I, intracranial pathology not visible on the computerized tomography scan; II, cisterns present with shift ≤ 5 mm; lesions present, but no high-density or mixed-density lesions > 25 cm3, with bone fragments and foreign bodies; III, cisterns compressed or absent, shift ≤ 5 mm, with no high-density or mixed-density lesions > 25 cm3; IV, shift > 5 mm, with no high-density or mixed-density lesions >25 cm3; V, any surgically evacuated lesion; and VI, high-density or mixed-density lesions >25 cm3 without surgical evacuation. The patient's condition – body temperature, heart rate and respiratory rate, blood pressure, and pulse blood oxygen saturation – was monitored continuously at the bedside with monitoring apparatus (Hewlett-Packard, Palo Alto, CA, USA). Daily evaluations of neurologic status over the initial 14-day period were performed via the GCS score, adverse experiences, surgical procedures, and intracranial complications. Intake and output of fluids were also recorded. Laboratory tests including hematology, the coagulation profile and clinical chemistry were performed daily and then for 1 week after injury. A urine pregnancy test was performed at enrollment for female patients (as necessary).\n\nBODY.MATERIALS AND METHODS.NEUROLOGIC OUTCOME MEASUREMENTS:\nThe neurologic outcome was evaluated according to the Glasgow Outcome Scale (GOS) score, which contains five levels of outcome: good recovery, moderate disability, severe disability, vegetative survival, or death. For statistical analysis, GOS scores were dichotomized into favorable or unfavorable outcomes. Patients in the upper two GOS outcome groups (good recovery and moderate disability) were considered of favorable outcome, and patients in the other groups (severe disability, vegetative state, or death) were considered of unfavorable outcome. Secondary efficacy endpoints were the modified Functional Independence Measure (FIM) score and mortality. Based on previous reports [21,22], the modified FIM measurements of disability in three areas of activity (domains of self-care, motor function, and cognitive function) were chosen from the 18 items in the full FIM. Each of three items (expression, feeding, and locomotion) includes four possible levels of function ranging from total dependence (1) to independence (4). The total modified FIM scores therefore ranged from 3 to 12. 
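The two outcome scales described above reduce to simple rules: the GOS is dichotomized into favourable (good recovery or moderate disability) versus unfavourable outcomes, and the modified FIM sums three items, each scored from 1 (total dependence) to 4 (independence), giving totals from 3 to 12. A small sketch capturing these definitions, for illustration only:

```python
# Helpers mirroring the outcome definitions described above. Illustrative only;
# category wording follows the article.

FAVOURABLE_GOS = {"good recovery", "moderate disability"}
UNFAVOURABLE_GOS = {"severe disability", "vegetative survival", "vegetative state", "death"}

def dichotomize_gos(gos_category: str) -> str:
    category = gos_category.strip().lower()
    if category in FAVOURABLE_GOS:
        return "favourable"
    if category in UNFAVOURABLE_GOS:
        return "unfavourable"
    raise ValueError(f"Unknown GOS category: {gos_category!r}")

def modified_fim_total(expression: int, feeding: int, locomotion: int) -> int:
    """Each item ranges from 1 (total dependence) to 4 (independence); total 3-12."""
    items = (expression, feeding, locomotion)
    if not all(1 <= item <= 4 for item in items):
        raise ValueError("Each modified FIM item must be between 1 and 4")
    return sum(items)

print(dichotomize_gos("Moderate disability"))   # favourable
print(modified_fim_total(3, 4, 2))              # 9
```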
The patients were assessed using the same measures both at 3 and 6 months in the follow-up protocol.\n\nBODY.MATERIALS AND METHODS.STATISTICAL ANALYSIS:\nDescriptive statistics, including proportions, means and standard deviations, were compiled for all demographic and outcome measures. Demographic and clinical data were analyzed using Fisher's exact test. The statistical analyses were conducted to assess the differences between the treatment group and the control group on specific variables. Statistical analysis was performed using contingency analysis (chi-squared) for categorical data and Student's t test for continuous data. P < 0.05 was considered statistically significant. SPSS 11.0 software package (SPSS Inc., Chicago IL, USA) was used for statistical analysis.\n\nBODY.RESULTS.PATIENTS:\nBetween March 2004 and February 2007, a total of 230 patients were screened in the present study. Of these, 159 patients meeting the protocol stipulation and condition were recruited and randomized to receive either progesterone (n = 82) or placebo (n = 77). Data were available for 154 patients (96%) at the 3-month follow-up and for 135 patients (84%) at the 6-month follow-up. Nineteen patients (11%) were lost to follow-up, three patients (1%) refused follow-up, and two patients (1%) withdrew from the trial. No subjects were enrolled in violation of the protocol stipulations (Figure 1). Figure 1Trial profile. The demographics of the progesterone and placebo groups are presented in Table 1. The cohorts were well balanced with no significant differences between the two groups. The medication history of patients, medication administration, and medical procedures were not significantly different among treatment groups. Table 1 Clinical and demographic characteristics between the two groups Admission characteristic Placebo (n = 77) Progesterone (n = 82) P value Males 57 (74) 58 (70) 0.64 Females 20 (25) 24 (29) 0.64 Mean (standard deviation) age (years) 31 (9) 30 (11) 0.52 Mean (standard deviation) time injury to administration (hours) 3.65 (1.46) 3.80 (2.03) 0.59 Mean (standard deviation) qualifying Glasgow Coma Scale score 6.1 (1.3) 6.0 (1.8) 0.68 Glasgow Coma Scale 3 to 5 20 (25) 22 (26) 0.90 Mechanism of injury Motor vehicle 62 (80) 63 (76) 0.57 Fall 8 (10) 10 (12) 0.71 Assault 4 (5) 7 (8) 0.40 Other 3 (3) 2 (2) 0.59 Surgical procedures 24 (31) 22 (26) 0.54 Pupillary response Bilaterally normal 26 (33) 26 (31) 0.78 Abnormal 51 (66) 56 (68) 0.78 Marshall computerized tomography scan classification I 0 0 II 9 (11) 7 (8) 0.50 III 22 (28) 28 (34) 0.44 IV 13 (16) 12 (14) 0.69 V 24 (31) 22 (26) 0.54 VI 9 (11) 13 (15) 0.44 Data presented as n (%) unless indicated otherwise. \n\nBODY.RESULTS.GLASGOW OUTCOME SCALE SCORES:\nThe 3-month and 6-month GOS scores between the progesterone and placebo groups are summarized in Table 2. There was a better recovery rate for the patients who were given progesterone than for those in the control group at 3-month follow-up (P = 0.044). A dichotomized analysis revealed significant differences in neurologic outcome between the treatment and control groups (Figure 2). The analysis using the dichotomization of GOS scores at 3 months post injury revealed a favorable outcome in 47% of the patients receiving progesterone and in 31% of the placebo group (P = 0.034). There was an unfavorable outcome in 53% of the patients receiving progesterone and in 70% of the placebo group (P = 0.022). 
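Using the 3-month counts from Table 2 (favourable outcome in 39 of 82 progesterone patients versus 24 of 77 placebo patients), a chi-squared test without continuity correction gives a P value close to the reported 0.034. The authors' exact procedure may have differed slightly, so the sketch below is only an approximate check.

```python
# Approximate check of the dichotomized 3-month GOS comparison using the
# counts in Table 2 (favourable = good recovery + moderate disability):
# 39/82 with progesterone vs 24/77 with placebo. A chi-squared test without
# continuity correction gives p ~= 0.035, close to the reported P = 0.034.
from scipy.stats import chi2_contingency

favourable = [21 + 18, 10 + 14]          # progesterone, placebo
total = [82, 77]
table = [[fav, tot - fav] for fav, tot in zip(favourable, total)]

chi2, p_value, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.2f}, p = {p_value:.3f}")
```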
At 6-month follow-up, the dichotomized GOS scores also showed a significant statistical difference between the two groups, similar to those 3 months after injury. The percentage of favorable outcome was 58% for the patients who were given progesterone and was 42% in the placebo group (P = 0.048). Forty-one percent of patients who were given progesterone and 57% of the placebo group exhibited an unfavorable outcome (P = 0.048). Table 2 Comparison of Glasgow Outcome Scale scores between the progesterone and placebo groups patients at 3-month and 6-month follow-up Glasgow Outcome Scale scores Progesterone (n = 82) Placebo (n = 77) 3 months Good recovery 21 (25) 10 (12) Moderate disability 18 (21) 14 (18) Severe disability 16 (19) 13 (16) Vegetative survival 13 (15) 16 (20) Death 15 (18) 25 (32) 6 months Good recovery 26 (31) 19 (24) Moderate disability 22 (26) 14 (18) Severe disability 9 (10) 11 (14) Vegetative survival 10 (12) 8 (10) Death 15 (18) 25 (32) Data presented as n (%). Figure 2Dichotomized Glasgow Outcome Scale scores for patients receiving either progesterone or placebo. There was a remarkably more favorable outcome among patients who were given progesterone compared with patients receiving placebo (P = 0.034) 3 months postinjury. At 6-month follow-up, the significant difference in the dichotomization of Glasgow Outcome Scale scores between the progesterone and placebo groups was similar to that after three-month injury (P = 0.048). Subgroup analysis for women also showed a significant difference in the percentage of favorable outcome between the two groups at 6-month follow-up (35% in the placebo group and 66% in the progesterone group, P = 0.036). The patients who were given progesterone in the group with GCS of 6 to 8 showed a more favorable outcome (43%) compared with the placebo group (28%) at 6-month follow-up (P = 0.044). There was no significant difference, however, in dichotomized outcomes in the group with GCS of 3 to 5 (P > 0.05).\n\nBODY.RESULTS.MODIFIED FUNCTIONAL INDEPENDENCE MEASURE SCORES:\nFigure 3 shows the modified FIM scores at 3-month and 6-month follow-up. There was a significant difference in the mean modified FIM score between two groups both at 3-month and 6-month follow-up. At the 3-month follow-up, the scores were 7.35 ± 1.89 for the placebo group and 8.02 ± 1.73 for the progesterone group (P < 0.05). Six months after injury, the placebo group showed a score of 8.95 ± 1.05 and the progesterone group presented 9.87 ± 1.17 (P < 0.01), suggesting good functional outcome in the patients treated with progesterone. Figure 3Modified Functional Independence Measure scores for patients receiving either progesterone or placebo. Modified Functional Independence Measure (FIM) scores at 3-month and 6-month follow-up from patients receiving either progesterone or placebo show that the scores in the progesterone group were significantly higher than those in the placebo group at both 3-month and 6-month follow-up. Data expressed as the mean ± standard deviation. Different from the placebo group: *P < 0.05, **P < 0.01.\n\nBODY.RESULTS.MORTALITY:\nDuring the 6 months of follow-up, a total of 40 patients (25%) died in the present study (37 patients died during their hospital stay). Seventy percent of deaths occurred within 1 week after trauma. Mortality was attributed to the heavy head injury in each case. 
The mortality rate in the progesterone treatment group was significantly lower at 6-month follow-up compared with the placebo group (18% versus 32%, P = 0.039).\n\nBODY.RESULTS.INTRACRANIAL PRESSURE:\nFigure 4 shows the ICP in the progesterone group patients and in the placebo group patients at 24 hours, 72 hours and 7 days after injury. The ICP was monitored continuously for 75 patients (47%), 40 in the progesterone group and 35 in the placebo group. The mean ICP shows no apparent difference at 24 hours after trauma between the two groups (progesterone group, 22.1 ± 4.3 mmHg versus placebo group, 23.2 ± 4.6 mmHg; P = 0.121). At 72 hours and 7 days after injury, the mean ICP of patients who were given progesterone was slightly lower than those of patients who received placebo, but the differences were not statistically significant (16.9 ± 3.8 mmHg and 14.8 ± 3.8 mmHg for progesterone-treated patients versus 18.2 ± 5.1 mmHg and 15.9 ± 4.1 mmHg for placebo-treated patients, respectively; P > 0.05). Figure 4Comparison of intracranial pressure between patients receiving either progesterone or placebo. The mean intracranial pressure between the progesterone and placebo group patients shows no significant differences 24 hours, 72 hours and 7 days after injury between the two groups (P > 0.05). Data expressed as the mean ± standard deviation.\n\nBODY.RESULTS.GLASGOW COMA SCALE SCORES AND CLINICAL MEASUREMENTS:\nThe mean GCS scores increased progressively in the two groups during the 14-day acute phase of the study, with no apparent differences among the treatment groups. Meanwhile, there was no obvious difference in average body temperature, heart and respiratory rates, blood pressure, pulse blood oxygen saturation, and laboratory testing between the progesterone and placebo groups.\n\nBODY.RESULTS.COMPLICATIONS AND ADVERSE EVENTS:\nProgesterone was well tolerated in the treated patients with acute severe TBI. No complication and adverse event associated with the administration of progesterone was found in this clinical study during the hospitalization periods.\n\nBODY.DISCUSSION:\nThe results of the present trial showed for the first time that progesterone administration had a longer-term efficacy on clinical outcomes in acute TBI patients. A significant increase in the proportion of patients with a favorable outcome in the progesterone group compared with the placebo group up to 6 months indicates the possibility of progesterone for treatment of acute TBI. Moreover, there were more surviving TBI patients in the treatment group than in the control group. Our results suggest the efficacy of progesterone in the treatment of acute severe TBI. Previous reports showed the evidence of efficacy in TBI animal models [8-14]. In the present study, the efficacy and safety of progesterone was confirmed in patients with acute severe TBI. Furthermore, our results using the modified FIM and GOS scores showed that progesterone administration remarkably enhanced functional recovery 6 months after injury and reduced the mortality of the patients with acute severe TBI (GCS = 6 to 8), although there was no statistical significance in the outcome improvement for GCS = 3 to 5 patients with and without progesterone treatment. The evidence of improved outcome for women patients also suggested, in part, a beneficial efficacy and feasibility of progesterone in women with TBI, in spite of the limited number of female patients in the trial. 
It is recognized that the pathophysiology of TBI is a multifactorial process involving a complex and interwoven series of pathologic processes following the onset of insult, such as increased extracellular glutamate concentrations, increased intracellular Ca2+, free radical overproduction and an exacerbated inflammatory response. Medication targeted at a single pathological injury factor is therefore unlikely to reverse functional deficits sufficiently following TBI. The ideal drug should be able to block multiple cellular events leading to brain damage following TBI. Neuroprotective strategies currently focus on acting on only one of these mechanisms. Some efforts have been made, however, to combine agents or interventions to increase the probability of success in this setting [23,24]. Nevertheless, a single pharmacologic agent or procedure that can slow or block the damaging chemicals released after brain injury remains highly desirable. Progesterone has several features that make it an attractive potential drug candidate for TBI. First, progesterone could protect against brain damage via multiple mechanisms [13,15-18]. The pharmacokinetics of progesterone and its pattern of adverse reactions are well known since the drug has been safely used for a long time [25,26]. Second, progesterone has a wide therapeutic window: a single bolus given up to 24 hours post injury may significantly reduce cerebral edema [7]. Third, progesterone may rapidly cross the blood–brain barrier and reach equilibrium with the plasma within 1 hour of administration [27-29]. Administration of progesterone soon after TBI would therefore probably benefit the recovery of the patient. In the present double-blind trial, progesterone or placebo was dissolved in the same camellia oil and administered twice daily for 5 days to patients with acute TBI. The patients who received progesterone experienced significant improvements in functional outcome, indicating neuroprotective properties of progesterone in acute severe TBI. There was no adverse event after administration of progesterone and no further late toxicity up to 6 months in the trial. Goss and colleagues suggested that low and moderate doses of progesterone (4 to 8 mg/kg) were optimal for facilitating recovery of select behaviors, and that postinjury progesterone treatment permitted a wider dose range than preinjury treatment in rats with medial frontal cortical contusions [30]. In addition, 5 days of postinjury progesterone treatment are needed to significantly reduce the neuropathological and behavioral abnormalities found in a rodent model of TBI [13]. Wright and colleagues used intravenous progesterone at a dose of 0.71 mg/kg, followed by 0.5 mg/kg progesterone every 12 hours for the following 3 days, which appeared safe in the treatment of TBI patients [19]. In our study, the patients received a single intramuscular injection of 1.0 mg/kg progesterone followed by the same dose every 12 hours for 5 consecutive days. The results of our trial showed that this higher single dose of progesterone used as protective therapy did not lead to any serious side effects. No obvious symptoms of hormonal reaction were observed in our study. Accordingly, it can be anticipated that progesterone may be a promising treatment for severe TBI patients as it is inexpensive, widely available and has a long track record of safe use in humans to treat other diseases.
The data in the present study are encouraging and support the use of GOS and FIM scores to assess TBI patients in a clinical trial. The GOS score, although useful, provides only a global assessment of function and dependence; it may not differentiate specific differences in cognitive function, motor function, or daily activities. The modified FIM score selects only three items from the 18-item score, and also distinguishes only four (as opposed to seven) levels of function. Subtle or complex deficiencies, particularly in cognitive function, may therefore not have been identified in the dataset. A deficiency in using any one scale to measure outcome is that it is limited in its scope of measurement. The present clinical study was therefore designed to evaluate functional outcome according to both the GOS and the modified FIM score. Intracranial hypertension has been considered an important factor affecting the outcome of patients with acute severe TBI. Progesterone administration has been shown to decrease cerebral edema [9]. In an experimental study with male rats, there was a linear correlation between the serum progesterone level and brain edema after experimental TBI: the higher the serum progesterone level, the lower the cerebral edema [31]. In the current trial, however, no statistically significant difference was found in ICP monitoring between the groups given progesterone or placebo. It seems that progesterone treatment has little effect on directly reducing the ICP of patients with acute severe TBI. As a result of randomization, all of these parameters were homogeneous between the progesterone and placebo groups in our clinical trial. Nevertheless, the current study has some limitations. The results could be influenced by the single-center design and the local perioperative standard of care. An adequately powered study is therefore necessary to assess progesterone's effects on neurologic outcomes. Our finding of significant differences in outcomes between the two groups emphasizes the potential value of using the GOS and FIM to tailor progesterone administration and the likelihood of observing similar differences in a larger patient population; however, the possibility exists that a statistical error may have occurred because of an inadequate sample size. Further studies are needed to determine the mechanisms of action underlying the neurologic effect observed.\n\nBODY.CONCLUSION:\nThe present pilot study indicated that the use of progesterone may significantly improve the neurologic outcome of patients suffering severe TBI up to 6 months after injury, providing a potential benefit to the treatment of acute severe TBI patients. Our results strongly support further large, multicenter clinical trials to examine the ways in which progesterone achieves the observed neurologic effect and to decipher the optimal conditions in which it can be used to lengthen the duration of and improve the degree of neuroprotection.\n\nBODY.KEY MESSAGES:\n• Many neuroprotective agents have been shown to be effective in animal models of TBI, but no single agent has yet shown improved outcomes in head injury patients. • A number of experimental models have suggested that administration of progesterone has a potential benefit for head injury.
• The present clinical trial reveals that progesterone may be used as a potential safe drug for the treatment of acute severe head trauma patients.\n\nBODY.ABBREVIATIONS:\nFIM = Functional Independence Measure; GCS = Glasgow Coma Scale; GOS = Glasgow Outcome Scale; ICP = intracranial pressure; TBI = traumatic brain injury.\n\nBODY.COMPETING INTERESTS:\nThe authors declare that they have no competing interests.\n\nBODY.AUTHORS' CONTRIBUTIONS:\nGMX and WQY participated in the trial design and were involved in the study analysis and summary. GMX and WMW obtained the data. GMX, JW, ZHL and WMW participated in the data analysis and interpretation of the results. All authors reviewed the final version.\n\n**Question:** Compared to placebo what was the result of progesterone on The mean intracranial pressure values 72 hours and 7 days after injury?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** no significant difference\n"
}
] |
no significant difference
|
281
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Randomised phase II evaluation of irinotecan plus high-dose 5-fluorouracil and leucovorin (ILF) \n\n ABSTRACT:\nAn open-label randomised comparison of efficacy and tolerability of irinotecan plus high-dose 5-fluorouracil (5-FU) and leucovorin (LV) (ILF) with etoposide plus 5-FU/LV (ELF) in patients with untreated metastatic or locally advanced gastric cancer. One cycle of ILF comprised six once-weekly infusions of irinotecan 80 mg m−2, LV 500 mg m−2, 24-h 5-FU 2000 mg m−2, and ELF comprised three once-daily doses of etoposide 120 mg m−2, LV 300 mg m−2, 5-FU 500 mg m−2. In all, 56 patients received ILF and 58 ELF. Median age was 62 years, Karnofsky performance 90%, and disease status was comparable for both arms. The objective clinical response rates after 14 weeks treatment (primary end point) were 30% for ILF and 17% for ELF (risk ratio (RR) 0.57, 95% confidence interval (CI) 0.29–1.13, P=0.0766). Overall response rates over the entire treatment period for ILF and ELF were 43 and 24%, respectively (RR 0.56, 95% CI 0.33–0.97; P=0.0467). For ILF and ELF, respectively, median progression-free survival was 4.5 vs 2.3 months, time to treatment failure was 3.6 vs 2.2 months (P=0.4542), and overall survival was 10.8 vs 8.3 months (P=0.2818). Both regimens were well tolerated, the main grade 3/4 toxicities being diarrhoea (18%, ILF) and neutropenia (57%, ELF). The data from this randomised phase II study indicate that ILF provides a better response rate than ELF, and that ILF should be investigated further for the treatment of metastatic gastric cancer.\n\nBODY:\nGastric cancer is the fourth most common cancer in Europe and the third leading cause of cancer mortality (Bray et al, 2002). Although gastric cancer has declined over the past 50 years, the incidence of tumours at the gastro-oesophageal junction has increased (Blot et al, 1991). The use of chemotherapy for the management of patients with advanced gastric cancer, who have limited treatment options (Hohenberger and Gretschel, 2003), has only become widely acceptable over the last 20 years (Glimelius et al, 1997; Murad et al, 1993; Pyrhonen et al, 1995). 5-Fluorouracil (5-FU), usually in combination with leucovorin (LV, also referred to as folinic acid), forms the basis of most chemotherapy regimens used for the treatment of gastric cancer. A randomised phase III trial compared combinations of 5-FU with other active drugs in advanced gastric cancer: etoposide, LV and 5-FU (ELF) vs infusional 5-FU plus cisplatin (FUP) vs 5-FU, doxorubicin and methotrexate (FAMTX) (Vanhoefer et al, 2000). The overall response rates (ORRs) ranged from 9% (ELF) to 20% (FUP) and median survival times were between 6.7 months (FAMTX) and 7.2 months (both ELF and FUP). The observed differences were not statistically significant and there is still no definitive regimen for the treatment of gastric cancer. 
The combination of epirubicin, cisplatin and continuous infusion 5-FU (ECF) has been proposed as a standard first-line therapy for gastric cancer as a consequence of its significantly improved response rate (46%) and survival (8.7 months) when compared with FAMTX (Waters et al, 1999). More recently, ELF has been shown to provide better disease control (complete response (CR)+partial response (PR)+stable disease (SD)) for patients with proximal rather than distal tumours (85 vs 48%, P=0.04) (Schulze-Bergkamen et al, 2002). Tolerability, toxicity and ease of administration have become major determinants for selecting an appropriate therapy and ELF has emerged as a convenient, well-tolerated regimen that can be administered on an outpatient basis (Vanhoefer et al, 2000; Schulze-Bergkamen et al, 2002). Irinotecan (CPT-11, Camptosar; Pfizer Oncology, New York, USA) inhibits topoisomerase I thereby disrupting DNA replication and cell division within tumour cells. Response rates between 20 and 23% have been reported for irinotecan monotherapy in untreated gastric cancer (Futatsuki et al, 1994; Kohne et al, 2003). In patients who had failed previous therapy, irinotecan (180 mg m−2) combined with 5-FU (400 mg m−2, bolus) and LV (125 mg m−2) followed by 5-FU infusion (1200 mg m−2 over 48 h) yielded a response rate of 29%, while a further 34% of patients achieved SD (Assersohn et al, 2004). Irinotecan with bolus-LV/5-FU in the first-line treatment of gastric cancer provided a response rate of 22% (Blanke et al, 2001). However, this regimen (irinotecan 125 mg m−2, LV 20 mg m−2 plus 5-FU 500 mg m−2, all given weekly for 4 weeks followed by a 2-week rest) was associated with a high incidence of severe diarrhoea (28%) and neutropenia (36%) infection leading to substantial dose modifications (Blanke et al, 2001). By comparison, the combination of irinotecan with continuous rather than bolus infusions of LV/5-FU exhibited a lower incidence of grade 3 and 4 toxicities in colorectal cancer patients (Douillard et al, 2000; Saltz et al, 2000; Bouzid et al, 2003). Therefore, we have investigated a weekly dose of irinotecan (80 mg m−2) in combination with LV (500 mg m−2) and continuous 5-FU (2000 mg m−2 over 24 h) according to the AIO (Arbeitsgemeinschaft Internistische Onkologie) regimen (i.e. ILF) in gastric cancer patients. In a previous phase I study of ILF in the first- and second-line treatment of gastric cancer, we observed a response rate of 20% with a further 36% of patients reporting SD (Moehler et al, 2003). Importantly, toxicity was sufficiently manageable to allow outpatient-based treatment. Therefore, we initiated the present randomised, controlled, phase II study to compare the efficacy and safety of ILF with ELF in the first-line treatment of metastatic gastric cancer.\n\nBODY.PATIENTS AND METHODS.PATIENTS:\nEligible patients had untreated histologically proven gastric adenocarcinoma, or adenocarcinoma of the oesophagogastric junction with measurable metastatic disease and/or locally recurrent nodal involvement, were aged between 18 and 75 years with a Karnofsky performance score (KPS) ⩾60 and a life expectancy >12 weeks. Patients were required to have adequate haematological (neutrophils ⩾2.0 × 109 l−1, platelets ⩾150 × 109 l−1; haemoglobin ⩾10g dl−1), hepatic (total bilirubin ⩽1.25 × upper normal limit (UNL); aspartate (AST) and alanine (ALT) aminotransferases ⩽3 × UNL) and renal function (creatinine <1.25 × UNL). Patients with previous cancer therapies were excluded from the study. 
All patients provided signed and dated consent before entering the trial. The study was conducted in accordance with the principles of the Declaration of Helsinki and Good Clinical Practice guidelines and the protocol was initially approved by the Ethics committee of Aerztekammer Rheinland-Pfalz and later by all Ethics committees responsible for participating centres.\n\nBODY.PATIENTS AND METHODS.STUDY DESIGN AND RANDOMISATION:\nThis was an open-label, multicentre, phase II randomised trial with two treatment arms. Patients were randomly assigned and stratified according to centre, peritoneal involvement (yes/no) and prior gastrectomy (yes/no). The randomisation process was centralised and performed by the Coordination Centre for Clinical Trials (KKS), Mainz, Germany.\n\nBODY.PATIENTS AND METHODS.ADMINISTRATION OF STUDY DRUGS AND DOSE ADJUSTMENT:\nPatients assigned to ILF (Arm A) received irinotecan 80 mg m−2 intravenously (i.v.) over 60–90 min followed by LV 500 mg m−2 i.v. over 60 min and then 5-FU 2000 mg m−2 i.v. over 24 h, on day 1. Each cycle comprised six once-weekly treatments followed by a 13-day rest period. Systemic prophylactic atropine (0.25 mg) injections for irinotecan-related acute cholinergic symptoms were allowed for the first cycle but not recommended. Prophylactic treatment for delayed diarrhoea was not permitted. However, patients were carefully informed of the potential risk of delayed diarrhoea and neutropenia and the need for early intervention with loperamide, metoclopramide, antibiotics, or hospitalisation and parenteral rehydration in case of refractory diarrhoea (>48 h). Antiemetic treatment was performed using metoclopramide or HT-3 antagonists in a sequential manner. The prophylactic use of colony-stimulating factors was not permitted. Patients assigned to ELF (Arm B) received etoposide 120 mg m−2 i.v. over 60 min, LV 300 mg m−2 i.v. over 5–10 min and then 5-FU 500 mg m−2 bolus i.v. over 2–4 min, on day 1. Each cycle comprised three applications on consecutive days (1–3) followed by an 18-day rest. All study treatments were administered until disease progression, unacceptable toxicity or withdrawal of consent. In the event of toxicity (defined by the National Cancer Institute of Canada expanded common toxicity criteria; NCIC-CTC), treatment delays or dose reductions could be applied as follows. If at any time during a cycle there were moderate reductions in haematological function (neutrophil count 0.5–1.5 × 109 l−1, platelet count 25–75 × 109 l−1) or moderate diarrhoea or stomatitis (>grade 1), the next administration could be delayed for up to 2 weeks. If at any time haematological abnormalities were noted (neutrophils <0.5 × 109 l−1, neutrophils <1 × 109 l−1 with infection or fever, platelets <25 × 109 l−1) or if there were ⩾grade 3 or 4 diarrhoea or stomatitis, treatment had to be delayed until recovery to moderate levels (as described above) after which the following dose reductions were applied. For Arm A, 5-FU was reduced to 1600 mg m−2 and irinotecan to 65 mg m−2. For Arm B, in the case of haematological toxicity, etoposide had to be reduced to 100 mg m−2 and 5-FU to 400 mg m−2, and in the case of diarrhoea or stomatitis, 5-FU was reduced to 400 mg m−2. If a condition persisted despite dose reduction, or if a patient experienced myocardial infarction, treatment was terminated. In the case of hand–foot syndrome, the dose of 5-FU was to be reduced by 20%. 
Delayed diarrhoea was treated immediately with loperamide and rehydration and, if associated with severe neutropenia, a broad-spectrum antibiotic. Hospitalisation with i.v. rehydration was required for grade 4 or persistent (>48 h) diarrhoea, concomitant vomiting, fever, or KPS <60%.\n\nBODY.PATIENTS AND METHODS.STUDY EVALUATIONS:\nAt baseline up to five measurable lesions per organ and 10 lesions in total were to be identified as target lesions, measured using computed tomography (CT), and recorded according to the RECIST system (Response Evaluation Criteria In Solid Tumours; Therasse et al, 2000). The sum of the longest diameters for all target lesions was used as a reference for determining objective tumour response. Tumour responses were evaluated at week 7, week 14 and then every two cycles for patients receiving ILF or every four cycles for patients receiving ELF. Responses were determined according to RECIST as follows: complete response was defined as the disappearance of all target and nontarget lesions with no new lesions and confirmed by two observations at least 4 weeks apart; PR was defined as a reduction of 30% or more in the sums of the longest diameters of all measurable lesions relative to baseline with no new lesions; no change (NC) was defined as neither sufficient shrinkage to qualify for PR nor sufficient increase to qualify for progressive disease (PD) with no new lesions; and PD was defined as ⩾20% increase in the sum of the longest diameters, the occurrence of nontarget lesions (e.g. pleural effusion or ascites) or the appearance of brain metastases independently of performance at sites outside the brain. Safety and tolerability were assessed by regular clinical examinations and assessments of adverse events (weekly, at the end of treatment and at every 3 months of follow-up), disease symptoms, KPS, haematological and biochemical parameters.\n\nBODY.PATIENTS AND METHODS.STATISTICAL ANALYSIS:\nThe primary end point was objective clinical response (CR+PR) based on an interim analysis following 14 weeks of treatment. The one-sided exact Fisher's test was used to compare the treatment arms at the significance level α=5%. The analyses were performed on an intention-to-treat basis including all patients who were treated in the study. The secondary end points were ORR (for the entire treatment period), time to progression, tumour growth control, time to treatment failure (including progression, death or withdrawal) and survival. Time to event data were described by Kaplan–Meier estimates and treatment groups were compared by log-rank test. Time to event data were further evaluated by appropriate proportional Cox's models and results were summarised by hazard ratio point and 95% confidence interval (CI) estimates, and P-values of Wald χ2 test. Binary data were described by risk ratio (RR) point and 95% CI estimates and treatment groups were compared by exact Fisher's test. Binary data were further evaluated by appropriate logistic regression models and were summarised by odds ratio point and 95% CI estimates and P-values of Wald χ2 test. If not specified otherwise, P-values are presented from two-sided tests and two-sided 95% CI are presented. All analyses were performed using SAS version 6.12.\n\nBODY.RESULTS.PATIENT CHARACTERISTICS:\nIn all, 120 patients from 17 centres in Germany were randomised into the study between November 2000 and April 2003. 
Two patients from Arm A and four from Arm B withdrew without receiving study treatment; therefore, the intention-to-treat population contained 114 patients (56 received ILF and 58 received ELF). The baseline characteristics were well balanced between the two treatment groups (Table 1). The median age of patients was 62 years and the median KPS was 90%. In around one-third (31%) of patients, the primary tumour site was the oesophagogastric junction, 62% of patients had liver metastases and in 77% two or more organs were involved.\n\nBODY.RESULTS.RESPONSE RATES:\nThe objective clinical response rates following 14 weeks of treatment (primary end point) were 30% for ILF compared with 17% for ELF (RR 0.57, 95% CI 0.29–1.13, P=0.0766). The ORRs for the entire treatment period and including all assessments prior to discontinuation were 43% (24 of the 56) for ILF and 24% (14 of the 58) for ELF (Table 2). The increased response rate provided by ILF compared with ELF was statistically significant (RR=0.56; 95% CI=0.33–0.97; P=0.0467.) The tumour control rates (CR+PR+NC) were 63% (35 of the 56) and 41% (24 of the 58), respectively. Logistic regression analysis indicated that a baseline KPS ⩽80% reduced the likelihood of a response by 59% compared with patients whose KPS was greater than 80% (P=0.038) (Table 3). After adjustment for KPS, peritoneal involvement and surgery for primary tumour, the regression model also demonstrated that ILF was 138% more likely to provide a response when compared with the ELF regimen (Table 3, P=0.042).\n\nBODY.RESULTS.PROGRESSION-FREE SURVIVAL, TREATMENT FAILURE AND OVERALL SURVIVAL:\nAt the last data cutoff, the median follow-up was 9.4 months in Arm A and 5.8 months in Arm B. At this time, 96 of the 114 patients had died. Disease progression was the major cause of death and accounted for 79% of patients in both treatment groups. One patient from the ILF arm did not comply with the provided recommendations for the treatment of prolonged grade 3 diarrhoea and consequently died (i.e. toxic death). One patient in the ELF arm died from a cardiovascular event. Compared with ELF, the ILF regimen extended median progression-free survival, median time to treatment failure and median overall survival (Table 4). However, when the treatment groups were compared by log-rank test, there was no significant difference between the two treatments for any of these parameters (e.g. the Kaplan–Meier survival plot as shown in Figure 1). Investigational analyses found that the risk of progression was increased in patients with a primary tumour in the oesophagogastric junction and in those with metastatic involvement in two or more organs (Table 3). As would be expected, the risk of death was increased in patients with a low KPS, in those with two or more involved organs and in those with peritoneal involvement who received ELF (Table 3).\n\nBODY.RESULTS.SAFETY:\nThe median number of cycles administered in the study was two for ILF (Arm A) and three for ELF (Arm B) (Table 5). Although the median treatment duration period was over twice as long with ILF than with ELF, there were more dose administration delays (70%) and dose reductions (75%) with ILF than with ELF (52 and 45%, respectively). The main reason for discontinuing study treatment was disease progression; 54% of patients receiving ILF and 72% receiving ELF. 
Although only one patient in each treatment group withdrew because of treatment-related toxicity, five patients receiving ILF and three patients receiving ELF either withdrew consent or refused further treatment. The incidence of grade 3/4 haematological toxicities was low in both treatment groups with the exception of neutropenia, which was reported by 57% of patients receiving ELF (Table 6). There were more grade 3/4 gastrointestinal toxic events with ILF, notably diarrhoea, which was reported by 18% of patients compared with no reports with ELF. Grade 3/4 alopecia was reported by a significant proportion of patients receiving ELF (28%), but was only seen in 5% of those receiving ILF.\n\nBODY.DISCUSSION:\nAlthough chemotherapy regimens offer at best a slight, albeit statistically significant, improvement in survival for patients with gastric cancer, they are associated with a degree of toxicity that limits their value as a palliative treatment (Vanhoefer et al, 2000; Schoffski, 2002; Diaz-Rubio, 2004). The primary end point of clinical response at 14 weeks was selected so that a statistical comparison at a fixed time point could be made. However, as it is the convention in such studies, patients were treated until progression, and could respond to treatment at a later point. Therefore, the overall response and survival rates obtained from the entire dosing period provide a more clinically significant assessment of the efficacy of these regimens for discussion in relation to other trials in gastric cancer. The irinotecan-based combination provided again a greater ORR than that seen with the commonly used ELF regimen (43 vs 24%, respectively, P=0.0467). Overall response rates for ELF reported in previous studies range from 9 to 23% (Vanhoefer et al, 2000; Schulze-Bergkamen et al, 2002) and this compares well with the 24% response rate reported in this study. Accordingly, an ORR of nearly 50% for ILF, as seen in this study, is a substantial improvement and is in the range of previous reports of the use of this drug combination in this setting (Blanke et al, 2001; Moehler et al, 2003). The overall survival data in the present study also compare well with those from previous studies. The median overall survival with ELF has been reported at 7.2 and 8.0 months (Vanhoefer et al, 2000; Schulze-Bergkamen et al, 2002), which is similar to both the 8.3 months reported here and the data reported for irinotecan-based regimens in the second-line setting between 7.0 and 7.6 months (Moehler et al, 2003; Assersohn et al, 2004). By comparison, there was a nonsignificant trend for increased median survival with ILF in this study (10.8 months) and this compares well with data reported for more recent exploratory combinations such as capecitabine and docetaxel (10.5 months), and epirubicin, docetaxel and cisplatin (11.0 months) (Lee et al, 2004; Park et al, 2004). The same can be said of the progression-free survival period in the ILF group of 4.5 months, which compared well with the 4.1 and 5.2 months reported recently for docetaxel-based regimens (Lee et al, 2004; Park et al, 2004). In other randomised phase II studies, continuous 5-FU/LV infusion plus irinotecan has also provided promising efficacy (ORRs of 40–42%, median progression-free survival periods of 6.5–6.9 months and median overall survival periods of 10.7–11.3 months; Bouche et al, 2004; Pozzo et al, 2004). 
Consequently, large phase III studies are being considered to investigate irinotecan in combination with continuous 5-FU/LV infusion regimens. When patient histories, disease status and other factors were examined for their effects on clinical outcome, those patients who were in better general health (good performance status, low tumour burden) were more likely to achieve a response and less likely to have a progression event or die, regardless of the treatment arm to which they were randomised. Patients with peritoneal involvement at presentation have a generally poorer prognosis and as a group face a desperate need for improved treatment options (Rau et al, 1996). The data from this study demonstrated that these patients are less likely to suffer a fatal event if treated with ILF rather than ELF. This is potentially an important observation for the management of these difficult to treat patients (Blot et al, 1991; Bray et al, 2002). The extension of meaningful survival remains a major objective for oncologists who must therefore consider the impact of treatment-related toxicity. Overall, the occurrence of the toxicities in this study was consistent with the safety profiles of irinotecan, etoposide and 5-FU/LV. The ILF combination was well tolerated with a low and acceptable incidence of haematological toxicity. Gastrointestinal toxicity is a recognised side effect of ILF therapy (Douillard et al, 2000; Saltz et al, 2000), which can require hospitalisation and urgent medical intervention (Rothenberg et al, 2001). The incidence of grade 3 or 4 diarrhoea in the current study was comparable to the previous data in gastric cancer (Blanke et al, 2001; Moehler et al, 2003; Pozzo et al, 2004). With close monitoring of the patient, suitable medication and rehydration, most cases of diarrhoea can be managed effectively and do not present a significant obstacle to the clinical use of ILF. The toxicity observed in our study was lower than that reported by Douillard et al in the pivotal European first-line trial where patients with colorectal cancer received weekly irinotecan (80 mg m−2) plus an AIO-based regimen of 24-h high-dose 5-FU (2300 mg m−2) preceded by 2-h LV 500 mg m−2, and grade 3/4 diarrhoea was reported by 44% of patients (Douillard et al, 2000). The lower toxicity in our study might be due to the lower daily doses of 5-FU (2000 mg m−2 administered over 24 h). Work is ongoing to identify those patients who carry a specific genetic polymorphism in one of the main enzymes (UGT1A1) involved in the detoxification of irinotecan and are therefore more susceptible to the side effects of irinotecan (Mathijssen et al, 2001). Such work will improve the targeting of this useful therapy and may allow appropriate prescriptive dosing schedules on an individual basis. The present study concurs with similar phase II studies in that the combination of irinotecan with continuous LV/5-FU (ILF) represents a potentially valuable new treatment option for metastatic gastric cancer and requires further evaluation (Bouche et al, 2004; Pozzo et al, 2004).\n\n**Question:** Compared to Etoposide with 5-fluorouracil plus leucovorin (ELF) what was the result of Irinotecan with high-dose 5-fluorouracil plus leucovorin (ILF) on Overall response rates?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
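The response-rate comparison reported in the row above (24/56 overall responders with ILF versus 14/58 with ELF, published as RR 0.56, 95% CI 0.33–0.97, P=0.0467 by exact Fisher's test) can be cross-checked with a few lines of Python. This is an illustrative sketch, not part of the dataset or the original analysis: the counts are taken from the article text, and the confidence interval uses the standard log-risk-ratio normal approximation, so small rounding differences from the published figures are possible.

```python
import math
from scipy.stats import fisher_exact

# Overall response counts from the ILF vs ELF article (entire treatment period)
ilf_resp, ilf_n = 24, 56   # ILF: 43% overall response rate
elf_resp, elf_n = 14, 58   # ELF: 24% overall response rate

# 2x2 table: rows = treatment arm, columns = responder / non-responder
table = [[ilf_resp, ilf_n - ilf_resp],
         [elf_resp, elf_n - elf_resp]]
_, p_two_sided = fisher_exact(table, alternative="two-sided")

# Risk ratio of ELF relative to ILF (the direction of the published RR = 0.56)
rr = (elf_resp / elf_n) / (ilf_resp / ilf_n)

# 95% CI for the RR via the log-risk-ratio normal approximation
se_log_rr = math.sqrt(1 / elf_resp - 1 / elf_n + 1 / ilf_resp - 1 / ilf_n)
ci_lo = math.exp(math.log(rr) - 1.96 * se_log_rr)
ci_hi = math.exp(math.log(rr) + 1.96 * se_log_rr)

print(f"RR (ELF vs ILF) = {rr:.2f}, 95% CI {ci_lo:.2f}-{ci_hi:.2f}, "
      f"two-sided Fisher P = {p_two_sided:.4f}")
# Should land very close to the published RR 0.56 (95% CI 0.33-0.97), P = 0.0467
```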
[
{
"role": "assistant",
"content": "**Answer:** significantly increased\n"
}
] |
significantly increased
|
266
|
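The prompts in this dump ask for one of the literal strings "significantly increase", "no significant difference" or "significantly decrease", while the stored completions and the Label column use the past tense ("significantly increased", "significantly decreased"). The helper below is a purely hypothetical sketch of how either form could be mapped onto a single canonical class when scoring model outputs; none of the names in it belong to the dataset itself.

```python
import re

CANONICAL = ("significantly increase", "no significant difference", "significantly decrease")

def normalize_answer(text: str) -> str | None:
    """Map a free-text answer (e.g. '**Answer:** significantly increased')
    onto one of the three canonical option strings, or None if no match."""
    cleaned = re.sub(r"[*#]|answer:", "", text, flags=re.IGNORECASE).strip().lower()
    if "no significant" in cleaned:
        return "no significant difference"
    if "increas" in cleaned:      # matches both 'increase' and 'increased'
        return "significantly increase"
    if "decreas" in cleaned:      # matches both 'decrease' and 'decreased'
        return "significantly decrease"
    return None

# Examples using the completion and label shown in the row above
assert normalize_answer("**Answer:** significantly increased\n") == "significantly increase"
assert normalize_answer("significantly increased") == normalize_answer("significantly increase")
```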
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Randomized Clinical Trial of Laparoscopic Versus Open Repair of the Perforated Peptic Ulcer: The LAMA Trial\n\n ABSTRACT.BACKGROUND:\nLaparoscopic surgery has become popular during the last decade, mainly because it is associated with fewer postoperative complications than the conventional open approach. It remains unclear, however, if this benefit is observed after laparoscopic correction of perforated peptic ulcer (PPU). The goal of the present study was to evaluate whether laparoscopic closure of a PPU is as safe as conventional open correction.\n\nABSTRACT.METHODS:\nThe study was based on a randomized controlled trial in which nine medical centers from the Netherlands participated. A total of 109 patients with symptoms of PPU and evidence of air under the diaphragm were scheduled to receive a PPU repair. After exclusion of 8 patients during the operation, outcomes were analyzed for laparotomy (n = 49) and for the laparoscopic procedure (n = 52).\n\nABSTRACT.RESULTS:\nOperating time in the laparoscopy group was significantly longer than in the open group (75 min versus 50 min). Differences regarding postoperative dosage of opiates and the visual analog scale (VAS) for pain scoring system were in favor of the laparoscopic procedure. The VAS score on postoperative days 1, 3, and 7 was significant lower (P < 0.05) in the laparoscopic group. Complications were equally distributed. Hospital stay was also comparable: 6.5 days in the laparoscopic group versus 8.0 days in the open group (P = 0.235).\n\nABSTRACT.CONCLUSIONS:\nLaparoscopic repair of PPU is a safe procedure compared with open repair. The results considering postoperative pain favor the laparoscopic procedure.\n\nBODY.INTRODUCTION:\nThe incidence of perforated peptic ulcer (PPU) has declined over the past several years because of the introduction of anti-ulcer medication and Helicobacter eradication therapy [1, 2]. Nevertheless the incidence and mortality of PPU is 5–10%. The mortality will increase up to 50% if the perforation exists for more than 24 h [3, 4]. There are several options for treatment of PPU, but the preferred treatment is surgery by upper abdominal laparotomy [5, 6]. Mouret et al. published the first results of laparoscopic repair in 1990 [7]. He concluded that it was a good method that probably reduced postoperative wound problems and adhesions. After the success of laparoscopic cholecystectomy and other laparoscopic procedures, it was thought that patients would have less pain and a shorter hospital stay after laparoscopic correction of PPU [8, 9]. Various studies have shown that laparoscopic suturing of the perforation is feasible, but there is still no proof of real benefits of laparoscopic correction [3, 6, 10–12]. 
Therefore we performed a multicenter randomized trial comparing open correction of PPU with laparoscopic repair.\n\nBODY.METHODS.PARTICIPANTS:\nPatients with symptoms of the clinical diagnosis of PPU were included in nine medical centers in the Netherlands participating in the LAMA (LAparoscopische MAagperforatie) trial between March 1999 and July 2005. Eligible patients were informed of the two surgical approaches and were invited to participate in the study. Exclusion criteria were the inability to read the Dutch language patient information booklet, inability to complete informed consent, prior upper abdominal surgery, and current pregnancy. The ethics committees of all participating institutions approved the trial.\n\nBODY.METHODS.RANDOMIZATION:\nSurgeons contacted the study coordinator after the patients had provided informed consent and randomization took place by opening a sealed envelope. The envelope randomization was based on a computer-generated list provided by the trial statistician.\n\nBODY.METHODS.SURGICAL PROCEDURE:\nAll patients received intravenous antibiotics prior to operation and were allocated for Helicobacter pylori eradication therapy according to established guidelines [13]. The open surgical procedure was performed through an upper abdominal midline incision. Closure of PPU was to be achieved by sutures alone or in combination with an omental patch. After repair of the defect cultures were drawn from the peritoneal fluid, after which the peritoneal cavity was lavaged. During lavage it was permissible to insufflate the stomach to test for leakage of the closed defect. No method was specified for closing the abdomen. Laparoscopic repair was performed with the patient and the team set up in the \"French\" position. Trocars were placed at the umbilicus (video scope) and on the left and right midclavicular line above the level of the umbilicus (instruments). If necessary a fourth trocar was placed in the subxiphoid space for lavage or retraction of the liver. Surgeons were free to use either 0° or 30° video scopes for the procedure. The rest of the procedure was identical to that described above for open repair. No method was specified for closing the trocar incisions.\n\nBODY.METHODS.POSTOPERATIVE FOLLOW-UP:\nPostoperative pain was scored by means of a visual analog scale (VAS) for pain on days 1, 3, 7, and 28 ranging from 0 (no pain) to 10 (severe pain). In addition, the days during which opiates were used by the patients were registered. All complications, minor and major, were monitored. The treating surgeons determined time of discharge on the basis of physical well-being, tolerance of a normal diet, and ability to use the stairs. For this reason, this was an unblinded trial. Postoperative hospital stay without correction for time spent in hospital as a result of non-medical reasons (inadequate care at home) was calculated. Patients were invited to attend the outpatient clinic at 4 weeks, 6 months, and one year postoperatively. They were asked to complete forms related to pain and use of analgesics.\n\nBODY.METHODS.STATISTICAL ANALYSIS:\nData analysis was carried out according to the intention-to-treat principle as established in the trial protocol. Data were collected in a database, and statistical analyses were performed with the Statistical Package for Social Sciences for Windows (SPSS 15.0, SPSS Inc., Chicago, IL). A researcher blinded to the nature of the procedures performed all data analyses. The primary outcome of the trial was duration of hospital stay. 
The power analysis was performed on basis of a reduction in hospital stay by 1.5 days (10–8.5 days from admission) in favor of the laparoscopically treated group using a β of 0.80 and an α of 0.05. This resulted in a trial size of 50 patients per group. The Pearson chi-squared test was used to compare categorical variables, and the Mann-Whitney U-test was used to compare continuous variables as we could not assume normal distribution because of the relatively small numbers. In Tables 1–6 medians and interquartile ranges (IQR) are reported. All data were analyzed according to the intention-to-treat principle; i.e., patients remained in their assigned group even if during the procedure the surgeon judged the patient not to be suitable for the technique assigned or if conversion was required. Null hypotheses were tested two-sided and a P value of 0.05 or less was considered statistical significant.Table 1Baseline parametersLaparoscopic repairOpen repairP valuen = 52 n = 49 Male:female ratio1.3:11.9:1Median age (years) + IQR 66 (25.8)59 (29.5)0.185Median BMI (kg/m2) + IQR23 (4)22 (5)0.118Median duration of symptoms (h) + IQR11 (17)11 (19)0.948Median blood pressure systolic (mmHg) + IQR125 (38.5)130 (36.5)0.457Median blood pressure diastolic (mmHg) + IQR75 (25.5)75 (24.5)0.596Median heart rate (beats/min) + IQR 88 (34.0)92 (21)0.403Median body temperature (°C) + IQR 36.9 (0.92)36.8 (1.5)0.658Mannheim Peritonitis Index + IQR19.5 (8.25)16 (14)0.386Median white cell count (×109/l) + IQR 12.1 (8.9)12.1 (7.75)0.467Median ASA score + IQR 1.0 (1.0)1.5 (1.0)0.902IQR interquartile range, difference between 25th percentile and 75th percentile; BMI body mass indexASA American Society of Anesthesiologists Association score\n\nBODY.RESULTS.PATIENTS:\nA total of 109 patients were included in the trial based on a high suspicion of PPU (Fig. 1). Eight patients were excluded during operation because no gastric perforation was detected or a defect in other parts of the digestive tract was uncovered. Data for these patients were not collected and the patients were excluded from further analysis. The remaining 101 patients made up the study population; their baseline parameters are given in Table 1. Fifty-two patients were randomized for laparoscopic repair and 49 for open repair of the perforation. Forty patients were female. The mean age of the patients was 61 years. The BMI (body mass index) was equally distributed between the groups, with a median of 22.5. Patients in both groups had been suffering from symptoms for a mean duration of 11 h, and those in the laparoscopy group presented with a median Mannheim Peritonitis index [14] of 19.5, whereas those in the open group had a median Mannheim Peritonitis index of 16.Fig. 1Patient flow chart Thirty patients reported the use of non-steroidal anti-inflammatory drugs (NSAIDs; 17 laparoscopic versus 13 open), and 10 patients used proton pump inhibitors (6 laparoscopic versus 4 open). Patient history revealed gastric ulcer disease in 19 patients.\n\nBODY.RESULTS.INTRAOPERATIVE FINDINGS:\nThe discovered ulcer perforations were found to have a mean diameter of 10 mm, which did not differ between groups (Table 2). Location of the perforated ulcers was distributed equally between groups. Defects were located in the prepyloric region (n = 41), the postpyloric region (n = 34), and at the pylorus (n = 20). The median volume of lavage fluid used was 1,000 ml (range: 100–5,000 ml). The surgeon decided the amount of lavage used. 
There was no consensus on how much was necessary. Median blood loss did not differ between groups. Skin-to-skin time differed by 25 min, favoring open repair of PPU (Table 2).Table 2Intraoperative findingsLaparoscopic repairOpen repairP valuen = 52n = 49Median size of perforation (mm) + IQR10.0 (7.0)7.0 (6.0)0.379Number of patients with defect Pyloric812 Postpyloric2014 Prepyloric1922 Median volume of lavage (ml) + IQR1,000 (1,500)1,000 (1,425)1.000 Median bloodloss (ml) + IQR10.0 (40.0)10.0 (50.0)0.423 Skin to skin time (min) + IQR75 (47.2)50 (25.5)0.000\n\nBODY.RESULTS.INTRAOPERATIVE COMPLICATIONS:\nConversion to open surgery was required in four patients (8%). Reasons for conversion included the inability to visualize the ulcer defect because of bleeding (n = 1/52; 2%), inability to reach the defect because of perforation in the vicinity of the gastroduodenal ligament and because of a dorsal gastric ulcer (n = 2/52; 4%), and inability to find the perforation (n = 1/52; 2%).\n\nBODY.RESULTS.POSTOPERATIVE COMPLICATIONS:\nComplications were statistically equally distributed between the two groups (Table 3). There were 12 complications in 9 patients in the laparoscopic group and 24 complications in 15 patients in the open group. Mortality was 4% in the laparoscopic group and 8% in the open group. In the laparoscopic group death was caused by sepsis due to leakage at the repair site. In the open group 3 patients died because of pulmonary problems (ARDS, pneumonia), and 1 patient died after complications following a cerebrovascular accident (CVA) combined with respiratory insufficiency.Table 3Postoperative complicationsLaparoscopic repairOpen repairP valuen = 52n = 49Pneumonia21Respiratory insufficiency13ARDS1Cardiac problems22Sepsis31Leakage at repair site2Abscess3Ileus1Fascial dehiscence1Wound infection3Urinary tract infection2Incisional hernia1Cerebrovascular accident1Death24Total complications12240.061Total of patients with complications ≥19 (18%)15 (36%)\n\nBODY.RESULTS.DISCHARGE:\nTime to discharge was similar for the two groups, with a median difference of 1.5 days (Table 4). Nasogastric decompression could be stopped after 2–3 days in both groups (Table 4).Table 4Duration of hospital stay, nasogastric decompressionLaparoscopic repairOpen repairP valuen = 52n = 49Median hospital stay (days) + IQR6.5 (9.3)8.0 (7.3)0.235Median duration of nasogastric decompression (days) + IQR2.0 (3.0)3.0 (1.3)0.334\n\nBODY.RESULTS.PAIN:\nVisual analog pain scores were in favor of laparoscopic repair (Table 5; p < 0.005). Although the median duration of opiate use in the two groups was 1.0, the mean duration in the open group was found to be 0.6 days longer than in the laparoscopic group (Table 6).Table 5Postoperative painMedian VAS pain scoreMedian VAS pain scoreP valueLaparoscopic repairOpen repairDay 1 + IQR3.8 (3.0)5.15 (2.5)0.001Day 3 + IQR2.1 (2.5)3.0 (2.4)0.035Day 7 + IQR1.0 (2.0)1.85 (2.8)0.036Day 28 + IQR0.3 (0.7)0.0 (1.7)0.748Table 6Postoperative opiate usageOpiate requirementOpiate requirementP valueLaparoscopic repairOpen repairMedian duration (days) + IQR1.0 (1.25)1.0 (1.0)0.007Mean duration (days) ± SD1.0 ± 0.91.6 ± 0.90.007\n\nBODY.RESULTS.VAS APPEARANCE OF SCAR:\nThe VAS score for appearance of the scar left by the respective procedures (subjectively recorded in the same way as pain) differed by 2.3 points, favoring the laparoscopic procedure (7.7 vs. 
5.4; P = 0.033)\n\nBODY.DISCUSSION:\nThe need for surgery for PPU has declined enormously in Europe and America with reported rates ranging from 50% to 80%, thanks to effective gastric acid-reducing medication [15]. For this reason, as well as because many surgeons prefer upper laparotomy, it took more time than expected to include 100 patients in our study. Reasons given by surgeons who prefer open repair were that it is a more familiar procedure and it can be completed faster than laparoscopy. It was also noted that patients often undergo operation at night, when the surgeon on call was not always laparoscopically trained. Other randomized trials have already shown the feasibility of laparoscopic repair of PPU [3, 4, 6, 10]. Only a few had more than 100 patients, and some emphasized results from subgroups of patients [8, 11, 12]. We did not subdivide our patients and included patients with risk factors for developing sepsis or conversion [10]. In eight of the original 109 patients (7%) it became evident during the operation that the patient had a diagnosis different from PPU. In the patients who were randomized for laparoscopy this discovery revealed the benefit of laparoscopy as a diagnostic procedure indicating either an upper or lower abdominoplasty or continuation of the laparoscopy [16]. Conversion rate in the laparoscopy group was 8% (4/52). This is much lower than that reported in literature, where conversion rates as high as 60% were found [3, 4, 6]. This maybe partially explained by the fact that only trained and experienced laparoscopic surgeons (those performing at least 50 laparoscopic procedures a year) participated in this trial, confirming the belief that this procedure should only be done by experienced surgeons [3–5]. Operating time was significantly longer in the laparoscopy group (75 min versus 50 min), which is comparable to reports in the literature [3, 10]. A possible explanation for the longer operative time is that laparoscopic suturing is more demanding [9, 17], especially if the edges of the perforation are infiltrated and friable. Sutures easily tear out and it is more difficult to take large bites and to tie knots properly. Use of a single-stitch method described by Siu et al. [18], fibrin glue, or a patch might solve this problem [12, 19]. Another reason for the increase in operating time is the irrigation procedure. Irrigating through a 5-mm or even a 10-mm trocar is time consuming, and suction of fluid decreases the volume of gas and reduces the pneumoperitoneum. There is no evidence that irrigation lowers the risk of sepsis [20], so it might only be necessary if there are food remnants in the abdomen; perhaps there is no need for it at all. One of the suspected risks of laparoscopic surgery is that of inducing sepsis by increasing bacterial translocation while establishing a pneumoperitoneum [6]. This danger could not be confirmed in our trial. Furthermore data suggest that there is a decrease in septic abdominal complications when laparosopic surgery is used [8]. Evidence already exists that laparoscopic correction of PPU causes less postoperative pain [6, 12, 17, 18]. The meta-analysis published by Lau [6] showed that eight out of ten studies showed a significant reduction in dosage of analgesics required in the laparoscopic group. Also, the three studies that had included VAS pain scores showed consistently lower pain scores, as was observed in our study as well. 
Whether this will lead to a better quality of life for patients, especially during the first weeks after surgery still needs to be analyzed. Although patients in our series who underwent laparoscopy had less postoperative pain, there was no difference in the length of hospital stay in our two groups. In fact, hospital stay overall in our patients was very long. This was most likely caused by the fact that many patients, especially the elderly, could not be discharged because of organizational reasons. Of the 101 patients, 41% were 70 years or older (24 in the laparoscopic group versus 17 in the open group). It appears that the age of PPU patients is increasing, and this will eventually represent a significant problem in the future [2, 3]. One benefit of the laparoscopic procedure not often mentioned in literature [6] is cosmetic outcome. Nowadays patients are aware of this benefit, and sometimes this is the reason why they demand laparoscopic surgery. In conclusion, the results of the LAMA trial confirm the results of other trials that laparoscopic correction of PPU is safe, feasible for the experienced laparoscopic surgeon, and causes less postoperative pain. Operating time was longer in the laparoscopic group and there was no difference in length of hospital stay or incidence of postoperative complications.\n\n**Question:** Compared to Conventional open approach what was the result of Laparoscopic surgery on Visual analog scale pain score at day 7?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
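The LAMA trial in the row above describes its power calculation as 80% power at a two-sided alpha of 0.05 to detect a 1.5-day reduction in hospital stay, giving 50 patients per group. That arithmetic can be reproduced approximately with the usual two-sample normal-approximation formula. The standard deviation is not reported in the article, so the value used below (sigma of about 2.7 days) is an assumption chosen only to illustrate the calculation; it happens to yield a group size close to the published 50.

```python
from math import ceil
from scipy.stats import norm

def n_per_group(delta: float, sigma: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Per-group sample size for comparing two means, normal approximation."""
    z_alpha = norm.ppf(1 - alpha / 2)   # about 1.96 for a two-sided 5% test
    z_beta = norm.ppf(power)            # about 0.84 for 80% power
    return ceil(2 * ((z_alpha + z_beta) * sigma / delta) ** 2)

# delta = 1.5-day reduction in hospital stay (stated in the trial protocol);
# sigma = 2.7 days is an assumed SD, not reported in the article.
print(n_per_group(delta=1.5, sigma=2.7))   # ~51 patients per group
```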
[
{
"role": "assistant",
"content": "**Answer:** significantly decreased\n"
}
] |
significantly decreased
|
380
|
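The LAMA row compares continuous outcomes such as the VAS pain scores with the Mann-Whitney U test because a normal distribution could not be assumed for the relatively small groups. As a purely illustrative sketch of that kind of comparison, the snippet below runs the same test on two small made-up score vectors; the numbers are invented for illustration and are not taken from the trial.

```python
from scipy.stats import mannwhitneyu

# Hypothetical day-1 VAS pain scores (0 = no pain, 10 = severe pain);
# these values are illustrative only and do not come from the LAMA data.
vas_laparoscopic = [2.5, 3.0, 3.5, 3.8, 4.0, 4.5, 5.0]
vas_open = [4.0, 4.5, 5.0, 5.2, 5.5, 6.0, 6.5]

stat, p_value = mannwhitneyu(vas_laparoscopic, vas_open, alternative="two-sided")
print(f"U = {stat:.1f}, two-sided P = {p_value:.3f}")
```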
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Efficacy and Tolerability of Budesonide/Formoterol in One Hydrofluoroalkane Pressurized Metered-Dose Inhaler in Patients with Chronic Obstructive Pulmonary Disease\n\n ABSTRACT:\nBackground: Combination therapy with a long-acting bronchodilator and an inhaled corticosteroid (ICS) is recommended in patients with chronic obstructive pulmonary disease (COPD) who have frequent exacerbations. The efficacy and tolerability of the combination of budesonide/formoterol have been demonstrated in patients with COPD when administered via the dry powder inhaler (DPI) in a 1-year study and when administered via the hydrofluoroalkane (HFA) pressurized metered-dose inhaler (pMDI) in a 6-month study. Objective: This study assessed the long-term efficacy and tolerability of budesonide/formoterol HFA pMDI in patients with moderate to very severe COPD. Methods: This was a 12-month, randomized, double-blind, double-dummy, parallel-group, active- and placebo-controlled, multicentre study (NCT00206167) of 1964 patients aged ≥40 years with moderate to very severe COPD conducted from 2005 to 2007 at 237 sites in the US, Europe and Mexico. After 2 weeks of treatment based on previous therapy (ICSs, short-acting bronchodilators allowed), patients received one of the following treatments twice daily: budesonide/formoterol pMDI 160/4.5 μg × two inhalations (320/9 μg); budesonide/formoterol pMDI 80/4.5 μg × two inhalations (160/9 μg); formoterol DPI 4.5 μg × two inhalations (9 μg); or placebo. Main outcome measures: The co-primary efficacy variables were pre-dose forced expiratory volume in 1 second (FEV1) and 1-hour post-dose FEV1. Results: Budesonide/formoterol 320/9 μg demonstrated greater improvements in pre-dose FEV1 versus formoterol (p = 0.008), and both budesonide/formoterol doses demonstrated greater improvements in 1-hour post-dose FEV1 versus placebo (p < 0.001). The rate of COPD exacerbations was lower in both budesonide/formoterol groups compared with formoterol and placebo (p ≤ 0.004). Both budesonide/formoterol doses were more effective than placebo (p ≤ 0.006) for controlling dyspnoea and improving health status (St George's Respiratory Questionnaire). All treatments were generally well tolerated. The incidence of pneumonia was not different for active (3.4–4.0%) and placebo (5.0%) groups. Conclusions: Budesonide/formoterol pMDI (320/9 μg and 160/9 μg) improved pulmonary function and reduced symptoms and exacerbations over 1 year in patients with moderate to very severe COPD. Only budesonide/formoterol pMDI 320/9 μg demonstrated greater efficacy for both co-primary variables compared with formoterol DPI 9 μg. 
Both budesonide/formoterol pMDI dosages were well tolerated relative to formoterol and placebo.\n\nABSTRACT.ELECTRONIC SUPPLEMENTARY MATERIAL:\nSupplementary material is available for this article at 10.2165/00003495-200969050-00004 and is accessible for authorized users.\n\nBODY.BACKGROUND:\nIn patients with chronic obstructive pulmonary disease (COPD) and frequent exacerbations, the combination of a long-acting bronchodilator and an inhaled corticosteroid (ICS) is recommended.[1] Budesonide and formoterol delivered via a single dry powder inhaler (DPI; Symbicort® Turbuhaler®; AstraZeneca, Lund, Sweden) have been shown to improve pulmonary function[2,3] and reduce COPD exacerbation rates.[2,3] Fixed-combination budesonide/formoterol is available in the US in a hydrofluoroalkane pressurized metered-dose inhaler (pMDI; Symbicort® Inhalation Aerosol; AstraZeneca LP, Wilmington, DE, USA). While most patients with COPD can benefit from either a pMDIor DPI, the pMDI formulation allows patients whomay have inadequate peak inspiratory flow for optimal delivery from a DPI to use the pMDI effectively. A recent 6-month study demonstrated the efficacy and tolerability of budesonide/formoterol pMDI in patients with moderate to very severe COPD.[4] Treatment with budesonide/formoterol pMDI resulted in significant improvements in pulmonary function, dyspnoea and health-related quality of life; however, the duration of the study was limiting with respect to the analysis of low-frequency events, such as COPD exacerbations, and precluded an assessment of safety outcomes with long-term exposure.[4] In this study, the long-term efficacy and tolerability of two dosage strengths of budesonide/formoterol pMDI were assessed over 12 months in patients with moderate to very severe COPD.\n\nBODY.PATIENTS AND METHODS.PATIENTS:\nThe inclusion criteria were designed to select a population with moderate to very severe COPD with previous exacerbations (i.e. appropriate candidates for combination ICS/long-acting β2-adrenoceptor agonist [LABA] therapy): age ≥40 years, diagnosis of symptomatic COPD for >2 years, ≥10 pack-year smoking history, prebronchodilator forced expiratory volume in 1 second (FEV1) of ≤50% of predicted normal and prebronchodilator FEV1/forced vital capacity (FVC) of <70%. Patients were to have a Modified Medical Research Council dyspnoea scale score of ≥2 and a history of at least one COPD exacerbation requiring oral corticosteroids or antibacterials within 1–12 months before the first study visit. Additional enrolment criteria were the same as those in a similar 6-month study by Tashkin et al.[4]\n\nBODY.PATIENTS AND METHODS.STUDY DESIGN:\nThis was a randomized, double-blind, double-dummy, parallel-group, active- and placebo-controlled, multicentre trial (ClinicalTrials.gov identifier: NCT00206167) conducted from 2005 to 2007 at 237 sites in the US, Europe and Mexico. The study protocol was approved by a local institutional review board and ethics committee, and written informed consent was obtained from patients. The study was designed to conform with the Declaration of Helsinki, and was consistent with the International Conference on Harmonisation and Good Clinical Practice and applicable regulatory requirements. After meeting eligibility criteria, patients entered a 2-week run-in period, during which they received ICS monotherapy if previously stable on ICS (alone or in combination) and ipratropium bromide at a fixed dose if previously receiving anticholinergics. 
Albuterol (salbutamol) was permitted for rescue use throughout the study. At randomization, all previous ICSs were discontinued and patients were randomized to one of four treatments (figure 1). Patients who were treated with ipratropium bromide during the run-in period were allowed to continue on it throughout the study period. Other allowed and disallowed concomitant medications were the same as those reported in the study by Tashkin et al.[4]\nFig. 1Study design. To maintain blinding, patients received both a pressurized metered-dose inhaler (pMDI) and a dry powder inhaler (DPI) containing either active treatment or double-dummy placebo (PL) as appropriate. This study used formoterol (FM) DPI (Oxis® Turbuhaler®, AstraZeneca, Lund, Sweden) as the FM comparator because FM is not available as a hydrofluoroalkane pMDI in the US. A previous study in asthma patients reported equivalent FM-related bronchodilatory effects when FM was administered in combination with budesonide (BUD) via pMDI or alone via DPI.[5] Patients were asked to return to the clinic for follow-up visits 3–8 at the end of months 1, 2, 4, 6, 9 and 12, and received a telephone call 4 weeks after the last clinic visit. bid = twice daily; R = randomization.\n\nBODY.PATIENTS AND METHODS.EFFICACY EVALUATIONS:\nThe co-primary efficacy variables, measured at all clinic visits, were pre-dose FEV1 and 1-hour post-dose FEV1. Additional pulmonary function variables included pre-dose and 1-hour post-dose FVC measured at all clinic visits, and morning and evening peak expiratory flow (PEF) recorded daily in patient diaries. In a subset of patients (n = 491 of 1964; 25%) who agreed to undergo spirometry testing, 12-hour serial spirometry was performed and pre-dose and 1-hour post-dose inspiratory capacity (IC) were collected at randomization and months 6 and 12. FEV1 was measured pre-dose and at 5, 15, 30, 60, 120, 180, 240, 360, 480, 600 and 720 minutes post-dose. Baseline-adjusted average 12-hour FEV1 was calculated as the area between the 12-hour post-dose FEV1-over-time curve and the baseline pre-dose FEV1, divided by observation time. Spirometry was performed according to American Thoracic Society guidelines.[6] Crapo-predicted normals for FEV1 were used.[7] Secondary efficacy endpoints included COPD exacerbations, dyspnoea (separate and composite scores) and health status, all assessed as previously described.[4] A COPD exacerbation was defined as worsening of COPD requiring an oral corticosteroid or hospitalization.[4] Dyspnoea was assessed daily before the evening dose of study medication using the Breathlessness Diary, a validated single-item component of the Breathlessness Cough and Sputum Scale (BCSS).[8] Health status was assessed at months 1, 2, 6 and 12 using the St George's Respiratory Questionnaire (SGRQ).[9,10] Sleep score, percentage of awakening-free nights (sleep score of 0) and study rescue medication use were assessed as previously described by Tashkin et al.[4]\n\nBODY.PATIENTS AND METHODS.SAFETY EVALUATIONS:\nSafety was assessed by adverse event (AE) reporting. Pneumonia events were reported by physicians based on the Medical Dictionary for Regulatory Activities (version 10.0) pneumonia-related preferred terms (pneumonia, bronchopneumonia, lobar pneumonia or pneumonia staphylococcal). 
Vital signs and 12-lead ECGs were evaluated as previously described by Tashkin et al.[4] Subsets of patients were assessed for 24-hour urinary cortisol (n = 179), 24-hour Holter monitoring (n = 520), bone mineral density (BMD) at the hip and spine regions (n = 326), and ophthalmological assessments (n = 461), including intraocular pressure and progression of lenticular opacities. In all patients, blood and urine samples were collected, and comprehensive physical examinations were carried out at the time of screening and at the end of month 12. Blood and urine samples were analysed by a central laboratory (Quest Diagnostics Clinical Laboratories, Van Nuys, CA, USA). Vital signs, including heart rate and blood pressure, were measured at all study visits. Samples to assess 24-hour urinary cortisol levels were collected in a subset of patients at or before randomization and within 1 week before the 6- and 12-month visits, and analysed by Quest Diagnostics. 24-Hour Holter monitoring was performed at screening and months 1 and 4, BMD assessments (two sets of dual energy x-ray absorptiometry scans in the hip and spine regions) were performed at screening and the end of month 12, and ophthalmology assessments (intraocular pressure and lenticular opacities) were performed at screening and the end of months 6 and 12.\n\nBODY.PATIENTS AND METHODS.STATISTICAL ANALYSES:\nThe efficacy analysis set (i.e. intent-to-treat population) included all randomized patients who received at least one dose of randomized study medication and contributed sufficient data for at least one co-primary or secondary efficacy endpoint to be calculated during the randomized treatment period. The safety analysis population included all randomized patients who received at least one dose of randomized study medication and from whom any post-randomization data were available. For the subsets of patients who underwent serial spirometry, 24-hour urinary cortisol, BMD, 24-hour Holter monitoring and ophthalmological analyses, analysis sets were defined as those patients who received at least one dose of randomized study medication and had baseline and on-treatment values for the variable being assessed. Patients who discontinued prematurely completed final visit (month 12) assessments at the time of discontinuation, followed by the 4-week follow-up. A sample size of approximately 400 patients per treatment group was calculated to allow 90% power to detect a reduction in the number of COPD exacerbations of approximately 30%, adjusting for overdispersion of 2.3. This sample size ensured >95% power to detect a difference of 0.10L in FEV1, given an estimated standard deviation of 0.3 L. All tests were two-sided, with p ≤ 0.05 considered significant. The prespecified primary comparators were formoterol DPI for pre-dose FEV1 to demonstrate the contribution of budesonide and placebo for 1-hour post-dose FEV1. To address issues of multiplicity relating to multiple-dose comparisons and multiple variables of interest, a fixed-sequence testing procedure was adopted. The primary comparisons were prespecified as budesonide/formoterol pMDI 320/9 μg compared with (i) placebo for pre-dose FEV1 and 1-hour post-dose FEV1; and (ii) formoterol DPI 9 μg for pre-dose FEV1. If significant differences were obtained, comparisons continued with budesonide/formoterol pMDI 320/9 μg compared with placebo for (i) dyspnoea; (ii) SGRQ total score; and (iii) number of exacerbations. 
If significant differences were obtained for each of these key secondary variables, the lower dose of budesonide/formoterol pMDI was tested on the co-primary variables, as previously described, and, if significant differences were obtained, testing continued with number of exacerbations, dyspnoea and SGRQ total score compared with placebo. For all secondary efficacy variables, the primary comparison was budesonide/formoterol pMDI versus placebo. Primary and secondary variables were assessed as the change from baseline to the average over the randomized treatment period except for average 12-hour FEV1, mean FEV1 at 12 hours and SGRQ at end of treatment. Changes from baseline in the co-primary efficacy variables were analysed via analysis of co-variance (ANCOVA), adjusting for treatment, country and baseline value. The following three key secondary efficacy variables were identified: (i) dyspnoea; (ii) SGRQ total score; and (iii) exacerbations. Secondary variables were analysed similar to the co-primary efficacy variables; however, exacerbation rate (number/patient-treatment year) was analysed using a Poisson regression model, adjusted for country and differential randomization time, and time to first COPD exacerbation was described via Kaplan-Meier plot and analysed using the log-rank test. The number and percentage of patients experiencing clinically meaningful changes (based on a prespecified minimal important difference) in dyspnoea were analysed using the Cochran-Mantel-Haenszel test, adjusting for country. Geometric mean 24-hour urinary cortisol levels at end of treatment were compared between treatment groups using a multiplicative ANCOVA model. Mean changes from baseline to the average during the randomized treatment period in 12-lead ECG, and Holter variables and ophthalmology variables were analysed using a model similar to that used for the co-primary variables. BMD variables (natural logarithm of the analysis timepoint minus the natural logarithm of the respective baseline value for hip and spine) were analysed using an ANCOVA model adjusting for country, treatment and natural logarithm of the baseline value. Other safety data were summarized using descriptive statistics. No formal hypothesis testing of the safety data was performed.\n\nBODY.RESULTS.PATIENTS:\nOf 1964 randomized patients, 1355 completed the study (figure 2). Discontinuation was greater with placebo versus budesonide/formoterol and formoterol driven by withdrawal of consent. Time to discontinuation was significantly (p ≤ 0.004) prolonged in both budesonide/formoterol groups versus placebo. AE was the most common reason for study discontinuation, with a similar incidence across treatments (12.1–13.5%). Approximately 60% of patients had documented use of an ICS, either alone or in combination before entering the study. The percentage of patients who discontinued from the study in the formoterol and placebo groups was slightly higher in patients previously treated with ICS (34.9% and 38.0%, respectively) compared with those not previously receiving ICS before the study (26.8% and 33.9%, respectively). In contrast, patients in the budesonide/formoterol 320/9-μg group previously receiving ICS had a lower discontinuation rate compared with those not previously receiving ICS (23.1% vs 34.1%, respectively). In the budesonide/formoterol 160/9-μg group, the percentage of patients who discontinued the study was similar for those who were previously receiving an ICS (29.5%) and those who were not (28.2%).\nFig. 
2Patient disposition. AE = adverse event; bid = twice daily; BUD = budesonide; DPI = dry powder inhaler; FM = formoterol; med = medication; PL = placebo; pMDI = pressurized metered-dose inhaler. Most demographic and disease characteristics at baseline were similar across treatments (table I). However, in the serial spirometry subset (n = 491), mean percentage reversibility was greater in the budesonide/formoterol 320/9 μg (19.9%), budesonide/formoterol 160/9 μg (20.7%) and placebo (19.5%) groups versus formoterol (16.9%). COPD severity[1] was moderate in 17.8%, severe in 60.4% and very severe in 21.5% of patients. Common co-morbid conditions included hypertension (41.6%), lipid profile abnormalities (22.0%), cardiac disease (17.7%), diabetes mellitus (11.0%), osteoporosis (10.5%), cataracts (5.2%), atrial fibrillation/arrhythmia (4.3%) and congestive cardiac failure (2.9%).\nTable IPatient demographic and baseline clinical characteristics of randomized patients\n\nBODY.RESULTS.EFFICACY EVALUATIONS.PULMONARY FUNCTION.CO-PRIMARY ASSESSMENTS:\nImprovements in pre-dose FEV1 were significantly greater for budesonide/formoterol 320/9 μg compared with formoterol (primary comparison; p = 0.008) and for both budesonide/formoterol dosages compared with placebo (p < 0.001) [figure 3a]. Although improvements in 1-hour post-dose FEV1 were significantly greater for both budesonide/formoterol dosages compared with placebo (primary comparison; p < 0.001), budesonide/formoterol 320/9 μg also resulted in significantly greater improvements compared with formoterol (p = 0.023) [figure 3b]. Improvements from baseline were apparent at the first assessment (pre-dose FEV1 at end of month 1; 1-hour post-dose FEV1 at day of randomization) and overall maintained over the 12-month treatment period for both budesonide/formoterol dosages (figure 3a and b).\nFig. 3Co-primary efficacy endpoints. Least squares mean change from baseline by study visit over the randomized treatment period in (a) pre-dose forced expiratory volume in 1 second (FEV1) and (b) 1-hour post-dose FEV1. BUD = budesonide; DPI = dry powder inhaler; FM = formoterol; pMDI = pressurized metered-dose inhaler. * p < 0.001 vs placebo; † p ≤ 0.023 vs FM.\n\nBODY.RESULTS.EFFICACY EVALUATIONS.PULMONARY FUNCTION.SECONDARY ASSESSMENTS:\nMorning and evening PEF improved significantly for all active treatments compared with placebo (p ≤ 0.012) and for both budesonide/formoterol dosages compared with formoterol (p ≤ 0.017) [table II]. Results for pre-dose and 1-hour post-dose FVC are presented in the online supplement (see figures S1a and b in the supplementary material ['ArticlePlus'] at http://links.adisonline.com/DGZ/A5).\nTable IIMean (SD) changes from baseline in additional pulmonary function assessments In the serial spirometry subset, a ≥15% improvement in FEV1 was observed at 5 minutes after dose administration (first assessment) with all active treatments on the day of randomization (budesonide/formoterol 320/9 μg, 17.7%; budesonide/formoterol 160/9 μg, 20.3%; formoterol 9 μg, 16.5%) and at end of treatment (budesonide/formoterol 320/9 μg, 21.8%; budesonide/formoterol 160/9 μg, 22.4%; formoterol 9 μg, 15.0%) [figures 4a, b and c]. There was no evidence of a diminished effect at end of treatment over the 12-hour period in the budesonide/formoterol groups. 
Mean FEV1 at 12 hours and baseline-adjusted average 12-hour FEV1 were significantly improved with both budesonide/formoterol dosages compared with placebo on the day of randomization and at end of treatment (p ≤ 0.002) [table II]. Although both budesonide/formoterol dosages demonstrated significant improvements in mean FEV1 at 12 hours and in baseline-adjusted average 12-hour FEV1 compared with formoterol on the day of randomization (p ≤ 0.029), only budesonide/formoterol 320/9 μg demonstrated this effect at the end of treatment (p ≤ 0.004) [table II].\nFig. 4Mean percentage change from baseline in forced expiratory volume in 1 second (FEV1) over 12 hours at randomization and end of treatment (EOT) for (a) budesonide (BUD)/formoterol (FM) 320/9 μg twice daily (bid) vs placebo, (b) BUD/FM 160/9 μg bid vs placebo and (c) FM 9 μg bid vs placebo. DOR = day of randomization; DPI = dry powder inhaler; pMDI = pressurized metered-dose inhaler. No significant differences in pre-dose IC were observed among the treatment groups (figure 5a). Improvements from baseline to the average over the randomized treatment period in 1-hour post-dose IC were significantly greater with both budesonide/formoterol dosages compared with placebo (p < 0.001) and formoterol (p ≤ 0.018) [figure 5b]. Both dosages of budesonide/formoterol resulted in a 350 mL (≈25%) improvement from baseline to the average over the randomized treatment period in 1-hour post-dose IC, and formoterol resulted in a 210 mL (17%) improvement in 1-hour post-dose IC.\nFig. 5Least squares mean change from baseline by study visit over the randomized treatment period in (a) pre-dose inspiratory capacity (IC) and (b) 1-hour post-dose IC. BUD = budesonide; DPI = dry powder inhaler; FM = formoterol; pMDI = pressurized metered-dose inhaler. * p < 0.001 vs placebo; † p = 0.01 vs FM; ‡ p < 0.05 vs FM; § p < 0.01 vs placebo.\n\nBODY.RESULTS.EFFICACY EVALUATIONS.CHRONIC OBSTRUCTIVE PULMONARY DISEASE (COPD) EXACERBATIONS:\nTime to first COPD exacerbation was significantly prolonged with both budesonide/formoterol dosages compared with placebo (p ≤ 0.004) and with budesonide/formoterol 320/9 μg compared with formoterol (p = 0.026) [figure 6]. In addition, significant reductions in the overall number of exacerbations per patient-treatment year were observed with budesonide/formoterol 320/9 μg and 160/9 μg versus placebo (37% and 41%, respectively; p < 0.001) and formoterol (25% and 29%, respectively; p ≤ 0.004) [see figure S2 in the supplementary material]. These reductions were driven by exacerbations treated with oral corticosteroids, the rate of which was reduced with budesonide/formoterol 320/9 μg and 160/9 μg versus placebo (37.5% and 42.9%, respectively) and formoterol (24.1% and 30.6%, respectively) [p ≤ 0.006]. In addition, the percentage of patients in the budesonide/formoterol 320/9 μg, budesonide/formoterol 160/9 μg, formoterol and placebo groups who experienced an exacerbation was greater in those receiving ICS therapy either alone or in combination before the study (33.7%, 38.2%, 39.9% and 40.1%, respectively) compared with those not previously receiving ICS (25.8%, 24.9%, 29.4% and 32.8%, respectively).\nFig. 6Kaplan-Meier probability curve for the time to first chronic obstructive pulmonary disease exacerbation during randomized treatment. BUD = budesonide; DPI = dry powder inhaler; FM = formoterol; pMDI = pressurized metered-dose inhaler. 
* p ≤ 0.004 vs placebo; † p = 0.026 vs FM.\n\nBODY.RESULTS.EFFICACY EVALUATIONS.HEALTH STATUS AND COPD SYMPTOMS:\nImprovements in SGRQ total score were significantly greater for both budesonide/formoterol dosages compared with placebo (p ≤ 0.006) and for budesonide/formoterol 160/9 μg compared with formoterol (p = 0.006; table III). The improvements from baseline in SGRQ total score in the active treatment groups were not clinically meaningful versus placebo (i.e. decrease of ≥4 points). Significantly greater improvements in COPD symptom variables were observed for both budesonide/formoterol dosages versus placebo for all variables (BCSS, dyspnoea score, cough score, sputum score, sleep score, percentage of awakening-free nights and total daily rescue medication use) [p ≤ 0.003]. Compared with formoterol, mean improvements were significantly greater with budesonide/formoterol 320/9 μg (p ≤ 0.038) for all COPD symptom variables except sputum score and percentage of awakening-free nights, and with budesonide/formoterol 160/9 μg (p ≤ 0.047) for all COPD symptom variables except BCSS, dyspnoea and sputum score (table IV).\nTable IIIMean changes (SD) in SGRQ total and domain scores from baselinea to end of treatmentbTable IVMean (SD) changes in chronic obstructive pulmonary disease (COPD) symptom variables from baselinea to the average over the randomized treatment period Improvements (i.e. reductions) in dyspnoea scores were significantly greater for all active treatments versus placebo (p ≤ 0.003) and for budesonide/formoterol 320/9 μg versus formoterol (p = 0.032; table IV). All active treatment arms demonstrated ≥0.2 points (minimal important difference) change from baseline, but only budesonide/formoterol 320/9 μg had an increase of ≥0.2 points over placebo. A significantly (p < 0.001) greater percentage of patients in both budesonide/formoterol groups and in the formoterol group experienced clinically meaningful improvements in dyspnoea versus placebo (see figure S3 in the supplementary material).\n\nBODY.RESULTS.SAFETY EVALUATIONS:\nMean treatment exposure was lowest for placebo (270 days) compared with budesonide/formoterol 320/9 μg (305 days), budesonide/formoterol 160/9 μg (299 days) and formoterol (289 days). The most commonly reported AE (irrespective of causality) was COPD, which had a lower incidence with budesonide/formoterol 320/9 μg versus placebo and a slightly higher incidence with budesonide/formoterol 160/9 μg versus placebo (table V). AEs considered by the investigator to be related to study medication were generally similar among treatment groups with the most commonly reported being oral candidiasis, COPD and dysphonia (see table SI in the supplementary material).\nTable VOverall adverse events (AEs) [irrespective of relationship to study medication] reported by ≥3% of patients The total incidence of pneumonia-related AEs (pneumonia, bronchopneumonia, lobar pneumonia and pneumonia staphylococcal) was similar for budesonide/formoterol 320/9 μg and 160/9 μg (4.0% and 3.4%, respectively) compared with formoterol (3.4%) and placebo (5.0%) [see table SII in the supplementary material]. For potential lung infections other than pneumonia, the incidence was slightly higher in the active treatment groups versus placebo, which was driven largely by bronchitis (see table SII in the supplementary material). 
AEs typically or potentially associated with local and systemic effects of inhaled corticosteroids (local effects: aphonia, dysphonia, oral candidiasis and candidiasis; systemic effects: weight gain, adrenal suppression, ocular effects, skin effects, psychiatric disorder, diabetes control, thirst, taste effects and bone effects) were more frequent with budesonide/formoterol 320/9 μg (10.3% and 4.0%, respectively) and budesonide/formoterol 160/9 μg (5.7% and 4.5%, respectively) than formoterol (0.6% and 2.6%, respectively) and placebo (2.5% and 2.7%, respectively). The overall incidence of LABA class effects (i.e. tremor, palpitation, tachycardia, potassium changes, glucose changes, headache, agitation, anxiety, sleep effects and muscle cramp) was low, but higher among active treatment groups (budesonide/formoterol 320/9 μg [9.5%], budesonide/formoterol 160/9 μg [8.9%] and formoterol [6.5%]) versus placebo (4.8%). The incidence of cardiac-related AEs was higher for all active treatments (10.5–11.3%) versus placebo (6.9%). The most common cardiac-related AEs were hypertension and angina pectoris (budesonide/formoterol 320/9 μg [2.4% and 1.2%, respectively], budesonide/formoterol 160/9 μg [1.6% and 0.8%, respectively], formoterol [2.6% and 1.8%, respectively] and placebo [2.1% and 1.0%, respectively]). Although the number of atrial fibrillation AEs was low, all cases occurred within the active treatment groups. Discontinuations due to AEs (DAEs) occurred in 238 patients, with a similar incidence observed across treatment groups (range 11.3–12.5%) [see table SIII in the supplementary material]. The most common DAE was COPD, which was highest in the formoterol group (7.3%) and lowest in the budesonide/formoterol 320/9 μg group (4.0%); the incidence was similar in the budesonide/formoterol 160/9 μg (6.1%) and placebo (6.0%) groups (see table SIII in the supplementary material). The overall incidence of non-fatal serious adverse events (SAEs) was highest with formoterol (17.8%) compared with the other treatment groups and higher with budesonide/formoterol 320/9 μg (15.6%) and budesonide/formoterol 160/9 μg (13.6%) versus placebo (12.1%) [see table SIV in the supplementary material]. The most common non-fatal SAEs (occurring in five or more patients [all treatment groups combined]) were COPD (6.8%), pneumonia (1.5%), atrial fibrillation (0.5%), angina pectoris (0.3%), bronchitis (0.3%) and coronary artery disease (0.3%). These non-fatal SAEs occurred at a similar incidence in all treatment groups, except for COPD, which was slightly more common in the active treatment groups (budesonide/formoterol 320/9 μg [7.1%], budesonide/formoterol 160/9 μg [6.7%] and formoterol [7.9%]) versus placebo (5.6%). Fifteen patients died during the randomized treatment period with no imbalance across treatments: three in the budesonide/formoterol 320/9 μg group; six in the budesonide/formoterol 160/9 μg group; two in the formoterol group; and four in the placebo group. Fifteen patients died after the randomized treatment period (≥2 days after stopping study treatment) with no imbalance across treatments: five in the budesonide/formoterol 320/9 μg group; two in the budesonide/formoterol 160/9 μg group; four in the formoterol group; and four in the placebo group. None of the deaths were considered drug related by the investigator. Clinically significant changes in vital signs, 24-hour urinary cortisol, and 12-lead ECGs, QT interval and Holter assessments were rare. 
The geometric mean values of 24-hour urinary cortisol at 6 months and end of treatment were lower in both budesonide/formoterol dosage groups compared with formoterol and placebo (see table SV in the supplementary material). Significant differences in 24-hour urinary cortisol were observed between the budesonide/formoterol 320/9 μg and placebo groups (p = 0.035) at month 6 and between the budesonide/formoterol 320/9 μg and formoterol groups (p = 0.044) at end of treatment. Corrected QT (QTc) intervals ≥450 msec were experienced by similar percentages of patients across all treatment groups, whereas few patients experienced QTc intervals ≥500 msec or a QTc change ≥60 msec (see table SVI in the supplementary material). Data from ECG and Holter recordings showed a similar incidence of new onset atrial fibrillation in the active and placebo groups. BMD was stable over the 1-year study period in all treatment groups, with small but statistically significant differences in changes from baseline observed for budesonide/formoterol 320/9 μg compared with all other treatments for total lumbar spine BMD (p ≤ 0.037) and for budesonide/formoterol 320/9 μg versus formoterol for total hip BMD (p = 0.012) [see table SVII in the supplementary material]. The differences between the treatment groups in changes from baseline in BMD were of unclear clinical relevance, as the geometric least squares mean ratios for these treatment group comparisons were close to 1 (range 0.98–0.99). Minor increases in lenticular opacities and intraocular pressure were noted across all treatment groups in the ophthalmological analysis subset; the magnitude of these changes was similar among the treatment groups (see table SVIII in the supplementary material). Small but significant differences were observed for the change from baseline in posterior subcapsular score between the budesonide/formoterol groups (p = 0.022). Clinically significant changes in ophthalmological assessment were infrequent.\n\nBODY.DISCUSSION:\nIn this study, both budesonide/formoterol dosage strengths resulted in significant improvements from baseline over the randomized treatment period in pulmonary function, health status and COPD symptoms, as measured by dyspnoea, cough and sputum production, and a clinically significant reduction in COPD exacerbation rate compared with placebo. In addition, both budesonide/formoterol pMDI dosages were well tolerated relative to formoterol alone and placebo. The results of this 12-month study confirm those from the earlier related 6-month study by Tashkin et al.[4] and are consistent with those from studies of budesonide/formoterol DPI.[2,3] Taken together, the results demonstrate that this ICS/LABA combination provides benefits beyond formoterol alone in the treatment of COPD. The contribution of budesonide to improvements from baseline in pulmonary function and COPD symptoms was demonstrated by the greater efficacy of budesonide/formoterol 320/9 μg compared with formoterol alone for the co-primary variables of pre-dose FEV1 and 1-hour post-dose FEV1, and for dyspnoea and total BCSS scores, supporting use of the higher-dose combination for COPD symptoms. Both budesonide/formoterol dosage strengths demonstrated greater improvements from baseline compared with formoterol alone in morning and evening PEF, 12-hour FEV1, symptom scores and rescue medication use. 
In addition, the 25–29% reduction in exacerbation rate observed with both budesonide/formoterol dosage strengths beyond the substantial reduction achieved with formoterol alone further demonstrates the important contribution of budesonide to the combination product. The magnitude of the reduction in exacerbation rates reported in this study is similar to that reported previously for fixed-dose regimens of budesonide/formoterol 320/9 μg twice daily[2–4] and fluticasone propionate/salmeterol[11–13] relative to LABA alone. A key difference between this 12-month study and the 6-month study of similar design reported by Tashkin et al.[4] is that this study was powered to show a difference in exacerbations, while the 6-month study was not. This study demonstrated a significant reduction in exacerbation rates in patients treated with budesonide/formoterol 320/9 μg and 160/9 μg compared with those receiving formoterol or placebo. The fact that a greater percentage of patients in the formoterol and placebo groups who previously received ICS therapy, either alone or in combination, experienced exacerbations compared with those who did not previously receive ICS therapy may result in a skewing of the data; however, a similar response was observed in combination budesonide/formoterol patients with a greater percentage of patients who previously received ICS experiencing exacerbations compared with patients who were not previously treated with ICS therapy. Similarly, a greater percentage of patients who previously received ICS therapy discontinued in the formoterol and placebo groups than those who did not previously receive ICS therapy; however, this trend was reversed in the combination budesonide/formoterol groups. The long-term nature of this study allowed for a more thorough assessment of tolerability compared with the 6-month study.[4] No differences in pneumonia incidence were observed among treatment groups in this study or in the 6-month study, despite a shorter exposure time in the placebo group.[4] These results contrast with previous COPD studies that showed an increased incidence of pneumonia with fluticasone propionate/salmeterol treatment.[11–14] However, in this study and the 6-month study,[4] the incidence of potential lung infections other than pneumonia was slightly higher in all active treatments compared with placebo; these differences were largely driven by an increased incidence of bronchitis. The incidence of SAEs, including COPD SAEs, was slightly higher in both budesonide/formoterol groups compared with placebo. This finding may be attributed to the earlier discontinuation in the placebo group, which resulted in a shorter study drug exposure (35 and 29 fewer days vs budesonide/formoterol 320/9 μg and 160/9 μg, respectively). Although the overall incidence of atrial fibrillation, a known LABA class effect, was small (1.1%), an imbalance was noted between the formoterol-containing arms and placebo. However, no differences in new-onset atrial fibrillation were apparent among groups based on 24-hour Holter monitoring or ECG data collected at repeated visits during the randomized treatment period. 
Overall, the safety profile of budesonide/formoterol pMDI in this 12-month study was similar to that reported in the 6-month study by Tashkin et al.,[4] suggesting no increase in safety concerns with a longer duration of treatment.\n\nBODY.CONCLUSIONS:\nBoth dosage strengths of budesonide/formoterol pMDI (320/9 μg and 160/9 μg) were effective and well tolerated over 1 year of treatment in this population of patients with moderate to very severe COPD. In addition, budesonide/formoterol pMDI 320/9 μg demonstrated greater efficacy compared with formoterol for pre-dose FEV1, suggesting that the higher dosage strength containing twice the daily dose of budesonide is appropriate in this patient population.\n\nBODY.ELECTRONIC SUPPLEMENTARY MATERIAL:\nSupplementary material, approximately 247 KB.\n\n**Question:** Compared to Formoterol DPI 9 μg what was the result of Budesonide/formoterol pMDI 160/9 μg on Incidence of pneumonia?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}] | [{"role": "assistant", "content": "**Answer:** no significant difference\n"}] | no significant difference |
256 | [{"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Impact and Process Evaluation of Integrated Community and Clinic-Based HIV-1 Control: A Cluster-Randomised Trial in Eastern Zimbabwe\n\n ABSTRACT.BACKGROUND:\nHIV-1 control in sub-Saharan Africa requires cost-effective and sustainable programmes that promote behaviour change and reduce cofactor sexually transmitted infections (STIs) at the population and individual levels.\n\nABSTRACT.METHODS AND FINDINGS:\nWe measured the feasibility of community-based peer education, free condom distribution, income-generating projects, and clinic-based STI treatment and counselling services and evaluated their impact on the incidence of HIV-1 measured over a 3-y period in a cluster-randomised controlled trial in eastern Zimbabwe. Analysis of primary outcomes was on an intention-to-treat basis. The income-generating projects proved impossible to implement in the prevailing economic climate. Despite greater programme activity and knowledge in the intervention communities, the incidence rate ratio of HIV-1 was 1.27 (95% confidence interval [CI] 0.92–1.75) compared to the control communities. No evidence was found for reduced incidence of self-reported STI symptoms or high-risk sexual behaviour in the intervention communities. Males who attended programme meetings had lower HIV-1 incidence (incidence rate ratio 0.48, 95% CI 0.24–0.98), and fewer men who attended programme meetings reported unprotected sex with casual partners (odds ratio 0.45, 95% CI 0.28–0.75). More male STI patients in the intervention communities reported cessation of symptoms (odds ratio 2.49, 95% CI 1.21–5.12).\n\nABSTRACT.CONCLUSIONS:\nIntegrated peer education, condom distribution, and syndromic STI management did not reduce population-level HIV-1 incidence in a declining epidemic, despite reducing HIV-1 incidence in the immediate male target group. Our results highlight the need to assess the community-level impact of interventions that are effective amongst targeted population sub-groups.\n\nBODY.INTRODUCTION:\nHIV-1–prevalence declines may now be occurring in some sub-Saharan African countries [1]. However, there remains little direct evidence that prevention measures—rather than natural HIV-1 epidemic dynamics [2] or behaviour change prompted by mortality [3]—have contributed to the slowing of HIV-1 epidemics [4,5]. Syndromic management of sexually transmitted infections (STIs) proved effective early in an HIV-1 epidemic in north-west Tanzania [6]. Peer education to promote safe behaviours showed promise in early process evaluations [7], but a randomised controlled trial (RCT) of factory workers in Harare, Zimbabwe, done in the mid-1990s, proved inconclusive [8]. Subsequent RCTs of syndromic management [9] and mass treatment of STIs [10], together with an information, education, and communication (IEC) behaviour-change programme [9], showed no effect in more mature epidemics. 
Integrated implementation of synergistic community-based HIV-1 control strategies could be a more cost-effective and sustainable approach to HIV-1 prevention than parallel application of vertical (top-down) programmes [11]. One scientific evaluation of such a strategy has been reported in which a combination of IEC activities amongst the general population and syndromic STI management showed no impact on HIV-1 incidence at the population level [9], although participation in the IEC activities was associated with reduced HIV-1 infection in women [12]. We conducted a cluster-RCT to test the hypothesis that integrated implementation of combined community- and clinic-based HIV-1 prevention, in which IEC activities focus primarily on high-risk populations, can be feasible and effective in reducing HIV-1 incidence in a major maturing epidemic in eastern Zimbabwe (Protocols S1 and S2; Text S1 and S2).\n\nBODY.METHODS.PARTICIPANTS AND RANDOMISATION PROCEDURE:\nThe study communities comprised six pairs of communities matched by socio-economic type—small town, tea/coffee estate, forestry plantation, roadside trading settlement, and subsistence farming area (two pairs) (Figure 1). Each community included at least one Government or Mission health centre. It was anticipated that HIV-1 incidence would be similar within each pair of communities. Within each pair, one community was assigned at random (un-blinded coin toss by a Ministry of Health official witnessed by programme and research personnel) to receive the additional intervention and the other to be the control. These procedures were designed to ensure that Mission, non-governmental organisation, and private sector programmes (for details, please refer to the following section) would be distributed evenly between intervention and control sites. Figure 1Location of Intervention and Control Communities in Manicaland Province, Eastern Zimbabwe We assessed the effect of the intervention using results from laboratory tests for HIV-1 infection and questionnaire data collected in the baseline and 3-y follow-up rounds of a population-based, closed-cohort survey. The 12 study communities were enumerated in a phased manner, with paired communities being enumerated consecutively to minimise the effects of any seasonal factors. HIV-1–prevention activities were commenced in each intervention community shortly after completion of the baseline survey in that community. In each community, individuals eligible for the study were identified in the first round using data from household listings prepared in an initial census. All males and females aged 17–54 y and 15–44 y at last birthday (the age groups expected to have the highest incidence of HIV infection), respectively, who had slept in a household in the community for at least four nights in the previous month, and who had also done so at the same time 1 y earlier, were considered eligible for the study. In heterosexually driven HIV-1 epidemics, risk of infection can be correlated amongst marital partners [13]. 
Therefore, to maximise statistical power to detect differences in HIV-1 incidence, enrolment was restricted to one randomly selected member per marital group.\n\nBODY.METHODS.INTERVENTIONS:\nIntervention and control communities were to receive standard Government services including basic syndromic STI management, condom distribution from health clinics and Zimbabwe National Family Planning Council outlets, home-based care, and limited HIV/AIDS–focussed IEC activities (e.g., occasional AIDS-awareness meetings and distribution of posters and leaflets). In addition, social marketing of male and female condoms would be provided through an ongoing national programme [14]. The intervention comprised targeted and population-level strategies to promote safer sexual behaviour and to improve treatment of STIs that facilitate HIV-1 transmission. The intervention strategies were implemented by two local non-governmental organisations (Family AIDS Caring Trust and the Biomedical Research and Training Institute) and the Zimbabwe Ministry of Health and Child Welfare through an integrated programme of community- and clinic-based activities. Integration of the individual programme components was achieved through the joint involvement of the participating agencies in the planning and implementation of activities and through the inclusion of biomedical and behavioural aspects within each component. The programme design comprised three key components: (1) peer education and condom distribution amongst commercial sex workers and male clients at workplaces and in the general community, supported by income-generating projects; (2) strengthened syndromic management of STI services at local health centres; and (3) open days with HIV/AIDS IEC activities at health centres to promote safer sexual behaviour and to increase the uptake of local STI treatment services. The peer-education component was based on a model which had been developed by the Project Support Group at the University of Zimbabwe [7] and which had been widely implemented within Zimbabwe and neighbouring countries. Activities were held weekly at workplaces and at locations within the general community (e.g., beer halls and markets) where casual relationships were most frequently formed [15]. The target population comprised sex workers and male clients who form a bridge population in HIV transmission [16] between sex workers and the monogamous (or serial monogamous) majority of women [17,18]. It was posited that the high HIV-1 incidence observed amongst young women could be reduced by altering the behaviour of their older male partners whose own behaviour was intrinsically more risky [19]. The behavioural component would be reinforced in counselling sessions with STI patients and through micro-credit income-generating projects to reduce unmarried women's dependence on commercial sex work. The micro-credit scheme consisted of small interest-free loans repayable over 10 mo, provided to groups and to individuals together with training in small-business management. The targeted activities would be extended to the general population through open days held at local health centres. Besides providing basic HIV/AIDS information, it was envisaged that programme meetings and activities, by their continuous nature, would sustain high levels of awareness of the risks of HIV transmission and would facilitate renegotiation of community social norms, making safer behaviours easier to adopt. 
The key messages of the programme were: (1) remain faithful to one regular sexual partner; (2) use condoms consistently with any casual sexual partners; and (3) seek prompt and effective treatment for any STIs. Syndromic management of STIs at primary healthcare centres was first introduced in Zimbabwe in the 1980s [20] and formed the basis of STI diagnosis and treatment services at baseline in the intervention and control communities. It was envisaged that these services could be strengthened and made more effective through a programme of regular classroom training and on-site supervision of nursing staff, through the introduction of training in systemic counselling for STI patients, and through the provision of small quantities of treatment drugs to cover delays in routine supplies. Quality-assurance procedures applied in the intervention communities included pre- and post-training tests for peer educators and, for nursing staff, attending the syndromic STI management and systemic counselling courses, regular on-site supervision (including random spot checks) and training, refresher courses, routine planning meetings and monitoring of service statistics, and quarterly workshops where detailed programme procedures were reviewed and updated. An interim qualitative process evaluation of intervention activities was conducted during the inter-survey period, and a report on the findings was provided to the implementing organisations.\n\nBODY.METHODS.OUTCOME AND PROCESS MEASURES:\nThe primary outcome of the study was HIV-1 incidence at the community level amongst individuals who were uninfected at baseline. Blood was collected onto Whatman No. 3 filter paper and transported to the Biomedical Research and Training Institute laboratory in Harare. Blood spots were air dried at 4 °C and, for long-term (>1 mo) storage, were kept at −20 °C. For baseline studies, blood was eluted into phosphate-buffered saline, and antibodies to HIV were detected using a dipstick dot EIA (ICL-HIV-1/HIV-2 Dipstick, [PATH, http://www.path.org; produced locally in Thailand]) and a standard protocol [21,22]. All positive results and a 10% sample of negative results were confirmed using a plate EIA (Abbott Third-Generation HIV-1/HIV-2 EIA [http://www.abbott.com] or Genelavia MIXT HIV-1/HIV-2 [Sanofi Diagnostics Pasteur, Marnes La Coquette, France]). At follow-up, a similar protocol was followed. Only the samples from those participants recorded as being HIV seronegative at baseline were tested at follow-up, again using a dot EIA (ICL-HIV-1/HIV-2 Dipstick, [PATH, produced locally in India]). Where seroconversion was indicated, the frozen stored baseline sample was retested to confirm the original negative result using the same dot EIA test. Where the baseline result remained negative, the Abbott EIA test was used to confirm both baseline and follow-up results. The change in place of manufacture of the dot EIA and the exclusive use of Abbott test kits to confirm positive sera at follow-up was due only to changes in the supply of test reagents, and not to perceived changes in sensitivity or specificity [23]. Apart from the principal investigators (based in Harare, London and Oxford) and those nurses given permission by participants requesting voluntary counselling and testing (VCT), all research personnel remained blind to the HIV-1 status of individual participants. 
Secondary outcomes, measured at the community and individual level, were self-reported genital ulcers and urethral or vaginal discharge in the past year (STI cases), STI treatment effectiveness (self-reported cessation of symptoms), indicators of sexual and health-seeking behaviour change, and HIV/AIDS knowledge. The behaviour-change variables assessed were sexual debut, sexual partner change in the past year, non-regular partnerships in the past month, and unprotected sex with regular and casual partners in the past 3 y. The data on sexual partnerships and condom use were collected using the Informal Confidential Voting Interview method for 75% of respondents selected at random in the first round of the survey. This method includes procedures to build rapport, ensure a non-judgemental interview approach, and provide reassurance that there are no right or wrong answers to questions of a personal nature, and uses a simple secret voting-box system to reduce embarrassment and guarantee confidentiality in low-development settings [18]. Its use has been shown to be associated with greater disclosure of socially proscribed behaviour in the study population [24]. Process indicators examined comprised changes in knowledge and psychosocial status and indicators of programme coverage and quality.\n\nBODY.METHODS.SAMPLE-SIZE CALCULATIONS:\nInitial sample-size calculations assumed 20% HIV-1 prevalence at baseline, 30% loss to follow-up after 2 y, and 80% power to detect a 40% reduction in HIV-1 incidence in the intervention communities compared with control communities, assuming a background yearly incidence of 2%. Based on six pairs of communities and a co-efficient of variation between communities of 0.15, the required sample size in each community was 1,000. Funding constraints and slower than anticipated implementation of intervention activities led to revisions of the sample size for each community to 800 and the length of follow-up to 3 y, respectively. Assuming a proportionate increase in loss to follow-up to 41%, these arrangements also yielded 80% power to detect a 40% reduction in HIV-1 incidence.\n\nBODY.METHODS.STATISTICAL METHODS:\nTo test the randomisation with small numbers of communities, HIV-1 prevalence, STI history, and socio-demographic characteristics were compared at baseline for study participants in the intervention and control communities, together with uptake of STI treatment and VCT services offered at baseline. Outcome and process indicators were compared for intervention versus control communities. Analysis of the primary outcome was on an intention-to-treat basis. Incident events and person-years at risk of seroconversion were used to calculate HIV-1 incidence rates and unadjusted and adjusted incidence rate ratios (IRR) with 95% confidence intervals (CIs) for each pair of communities. Adjustment was made for sex, 3-y age group, and community-level baseline HIV prevalence. The overall IRRs (unadjusted and adjusted) were taken to be the geometric means of the IRRs for the six pairs of communities. We calculated 95% CIs for each geometric mean as geometric mean ± 1.96 × standard error of the geometric mean. Paired student t-tests on the logarithms of the pair-specific IRRs were used to test whether these differed significantly from unity [25]. The coefficient of variation between communities was calculated based on baseline HIV prevalence using a standard procedure for pair-matched studies [26]. 
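As an illustration of the pair-matched analysis described above — not the trial's own code or data (the authors analysed the data in Stata), and with the six pair-specific IRRs below chosen purely as hypothetical placeholders, the 95% CI computed here on the log scale as one common convention — the geometric-mean IRR, its CI, and the paired t-test on log(IRR) can be sketched roughly as follows.

```python
# Sketch only: geometric-mean IRR across matched community pairs, with a 95% CI
# computed on the log scale and a paired t-test on log(IRR) against IRR = 1.
# The six pair-specific IRRs are hypothetical placeholders, not trial data.
import math
from statistics import mean, stdev

from scipy import stats

pair_irrs = [1.35, 1.10, 0.98, 1.42, 1.20, 1.65]   # hypothetical pair-specific IRRs
log_irrs = [math.log(r) for r in pair_irrs]

log_mean = mean(log_irrs)
se_log = stdev(log_irrs) / math.sqrt(len(log_irrs))   # SE of the mean log(IRR)

gm_irr = math.exp(log_mean)                           # geometric mean IRR
ci_low = math.exp(log_mean - 1.96 * se_log)
ci_high = math.exp(log_mean + 1.96 * se_log)

# With each matched pair summarised by a single IRR, the paired t-test reduces
# to a one-sample t-test of the log IRRs against zero (i.e. against IRR = 1).
t_stat, p_value = stats.ttest_1samp(log_irrs, 0.0)

print(f"Geometric mean IRR {gm_irr:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f}), p = {p_value:.3f}")
```

Because the matched pairs enter only through their pair-specific IRRs, the paired comparison collapses to a one-sample test on the log scale, which is why the CI and the test statistic are both built from the same set of log IRRs.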
Analyses of prevalence for secondary outcome and process variables were conducted separately for male and female respondents seen at both survey rounds by fitting logistic regression models to the individual-level data and adjusting for community pair and, where available, value of variable at baseline. Since most programme activities were targeted and overall coverage of programme activities was therefore limited, sub-group analyses, adjusted for community pair, were done for HIV-1 incidence and behavioural outcomes to assess the individual-level effects of attendance at programme meetings. Data were entered and validated using SPSS-PC (http://calcnet.mth.cmich.edu/org/spss/index.htm) and data analysis was conducted in Stata version 7 (http://www.stata.com). Statistical tests were double-sided and results were taken to be significant at the 5% level.\n\nBODY.METHODS.ETHICAL APPROVAL:\nAll study participants in the intervention and control communities were offered free VCT for HIV-1, an information sheet on HIV/AIDS, results from a diagnostic test for Trichomonas vaginalis [27] (done at baseline only), and free treatment for T. vaginalis and other STIs from a research nurse. Testing and treatment for T. vaginalis was provided because the prevalence of other curable STIs was low in the study areas [22]. Antibodies reactive with T. vaginalis were detected in DBS eluates following a previously described procedure [27,28]. Written informed consent was sought as a condition of enrolment and continuation in the study. Prior ethical approval was obtained from the Research Council of Zimbabwe, number 02187; the Applied and Qualitative Research Ethics Committee in Oxford, United Kingdom, N97.039; and the UNAIDS Research Ethics Committee, ERC 98/03.\n\nBODY.RESULTS.PARTICIPANT FLOW:\nIn round 1 of the census (July 1998 to February 2000), 5,943 and 6,037 eligible individuals in the intervention (total population size 18,104) and control (18,633) communities, respectively, were selected for recruitment into the study cohort (Figure 2). In round 2, 3 y later (July 2001 to February 2003), 1,044 (23%) and 1,144 (26%) of baseline respondents who were still alive had migrated away from the intervention and control communities, respectively, and were therefore lost to follow-up (Figure 2). At both baseline and follow-up, migrants and non-migrants had similar risks of HIV-1 infection and associated behaviour [29]. Of those still resident in the intervention and control communities, 2,664 (75%) and 2,564 (77%), respectively, were interviewed and blood samples taken for a second time. Temporary absence from the usual place of residence was the main reason for non-participation in the intervention (n = 794, 95%) and control (n = 698, 94%) communities. The overall proportions of baseline respondents followed up at the end of the study were 55% and 56% in the intervention and control communities, respectively. The median follow-up of communities was 3.0 y (range of median within communities, 3.0–3.1). Figure 2Flow-Chart Comparing Participation and Follow-Up Rates in the Intervention and Control CommunitiesIndividuals enrolled in round 1 and still resident in the study communities were considered eligible for participation in round 2.\n\nBODY.RESULTS.BASELINE DATA:\nHIV-1 prevalence was higher in the intervention communities than in the control communities (24% versus 21%, risk ratio 1.13 [95% CI 1.05–1.22], p = 0.001). T. 
vaginalis infection, secondary school education, and spatial mobility were more common in the control communities, whilst history of genital discharge and uptake of STI treatment and VCT services offered in the survey were low overall but more frequent in the intervention communities (Table 1). However, the differences in each case were small and were unlikely to be clinically meaningful. Table 1 Baseline Characteristics of the Study Populations \n\nBODY.RESULTS.OUTCOMES AND ESTIMATION:\nMedian follow-up per person was 2.9 y (range 1.4–3.9) and 3.0 y (range 1.5–4.1) in the intervention and control communities, respectively. In total, 4,052 individuals had 212 incident events of HIV-1 during 12,009 person-years at risk, giving an HIV-1 incidence rate of 1.77 per 100 person-years at risk. HIV-1 incidence was higher in communities with higher baseline HIV prevalence (IRR 11.49 [95% CI 1.80–73.40], p = 0.010), but this difference disappeared after adjustment for stratification by community type (p = 0.8). HIV-1 incidence was higher in the intervention communities than in the control communities overall, and in each community type, except in the forestry plantations where it was almost identical (Table 2). The difference was not significant after adjustment for sex, age group, and baseline HIV prevalence (IRR 1.27 [95% CI 0.92–1.75], p = 0.012). The observed coefficient of between-community variation was 0.14. Table 2 HIV Prevalence at Baseline and HIV Incidence and IRRs for Intervention Versus Control Communities Looking at outcome indicators for community members (rather than for communities—the unit of randomisation), self-reported STI symptoms were similar in both sets of communities (Table 3). Treatment for STI symptoms in males was effective more frequently in the intervention communities, with men in the intervention community in five of the six matched pairs reporting reduced symptom recurrence. However, more young women in the intervention than in the control communities had started sex, and reports of unprotected sex with a casual partner in the study period were more common in the intervention communities. No differences were observed in consistent condom use with regular partners between the two sets of communities. In the intervention communities, knowledge about HIV/AIDS was enhanced amongst men, and more respondents reported a close relative or family member with AIDS (sex- and age-adjusted prevalence odds ratio 1.22 [95% CI 1.05–1.42], p = 0.009). Slightly more women in the intervention communities reported that condom use within marriage was becoming acceptable, but a greater proportion of men agreed with the statement that \"condoms reduce the pleasure of sex\". Table 3 Biomedical, Sexual Behaviour, and Psychological Outcomes at Follow-up by Residence in the Intervention and Control Communities A total of 63,261 peer-education meetings were held, and 6.8 million condoms were distributed by the programme in the intervention communities (Table 4). Outputs increased over time as new communities entered the programme. However, owing to high inflation and economic decline, the micro-credit income-generating projects proved impossible to implement. We were able to obtain data on STI episodes treated at clinics in the 11 out of 12 study communities that reported cases to the administrative districts of Mutasa and Makoni. In the three intervention communities each in Mutasa and Makoni, STI cases fell by 66% and 51%, respectively, over the 3-y study period. 
Similar declines of 67% and 52% occurred at clinics in the four control communities in Mutasa and the one control community in Makoni. Coverage of training in syndromic STI management and systemic counselling for nursing staff was high (Table 4). Table 4 Summary of Service Statistics on Programme Output Most of the activities were targeted at high-risk groups. In the general population sample interviewed in the follow-up survey, 1,779 (35%) and 647 (13%) of 5,098 respondents reported attending an HIV/AIDS meeting and a programme meeting, respectively (Table 5). More respondents in the intervention communities than in the control communities attended an HIV/AIDS meeting (41% versus 28%, prevalence rate ratio 1.44 [95% CI 1.33–1.56], p < 0.001) and a programme meeting (20% versus 5%, 4.27 [95% CI 3.52–5.17], p < 0.001), and participation was higher among men than women (prevalence rate ratio 1.32 [95% CI 1.14–1.53], p = 0.002). Fewer women in the intervention communities had heard about HIV/AIDS from external sources or believed that STI drugs were available at their local clinics. Sixty-two (2%) out of 2,528 respondents in the control communities reported spending at least 1 d in the past month in the intervention communities; the equivalent number for respondents in the intervention communities visiting control communities was 70 (3%) out of 2,683. Table 5 Intervention Coverage \n\nBODY.RESULTS.ANCILLARY ANALYSES:\nIn exploratory analysis to assess where the intervention failed, we found that HIV-1 incidence was reduced in males (IRR 0.48 [95% CI 0.24–0.98], p = 0.044) who reported attending programme meetings, after adjustment for the targeting of activities to groups with high-risk behaviour (Table 6). Amongst men who reported one or more casual sexual partners in the past 3 y, fewer of those who attended meetings reported unprotected sex with these partners (prevalence odds ratio 0.45 [95% CI 0.27–0.75], p = 0.002). HIV-1 incidence was not associated with programme participation in women. Table 6 HIV Incidence and IRRs by Meeting Attendance and Sex \n\nBODY.DISCUSSION.INTERPRETATION:\nWe conducted a scientific trial of the feasibility and impact of an integrated community- and clinic-based HIV-1–prevention intervention. The income-generating projects apart, the intervention activities were feasible. The outputs of the programme were extensive with more than 63,000 meetings being conducted and almost 7 million condoms distributed by trained peer educators. Programme messages were considered relevant and realistic. Local STI treatment and counselling services were strengthened and promoted in accordance with the intervention protocol. For male participants, these activities improved HIV/AIDS knowledge, increased the effectiveness of STI treatment, increased consistent condom use with casual partners, and reduced HIV-1 incidence. However, the cluster-RCT results clearly show that the intervention had no positive impact at the community level and suggest possible detrimental effects on the onset of female sexual activity and condom use with casual partners over a 3-y timeframe. Did the cluster-RCT design fail to capture the true effect of the intervention? There are three possibilities: (1) inadequate statistical power; (2) insufficient follow-up; and (3) contamination of intervention within control communities. 
The study design provided adequate statistical power to detect a meaningful average reduction (40%) in HIV-1 incidence in the intervention versus the control communities over a 3-y observation period. In hindsight, an effect size of 40% was too optimistic and the study had insufficient power to detect a smaller effect. However, there was no trend in the results towards reduced HIV-1 incidence in the intervention communities. Largely due to migration, attrition was close to that anticipated in the study design and was comparable to other recent cohort studies [6,10,9,30]. Migrants had similar characteristics and sexual behaviour to non-migrants [29]. The results of the exploratory sub-group analysis generate the hypothesis that high-risk behaviour was reduced in males attending programme meetings but did not translate into a wider impact on HIV-1 incidence at the population level. Changes in core and bridge populations may take more time to reflect in the general population than was observed in the trial. However, a longer period of follow-up would have increased attrition, and the finding of a possible adverse effect at the population level meant that it would not have been ethical to continue with the same intervention. Future trials of behaviour-change interventions may need to include multiple rounds with phased recruitment and (where interim results are favourable) may need to consider phased intervention implementation. We minimised intervention contamination by selecting physically separated study communities, and movements between intervention and control communities were rare. However, a similar peer-education programme was implemented in one control community (small town), and HIV-1–prevention activity was considerable in all control communities that also had greater access to information from external sources. In some cases, programme messages (e.g., promotion of condom use) conflicted with those of other agencies working in the intervention communities. The effects of these other programmes could have limited our ability to detect a reduction in HIV-1 incidence caused by the current intervention. The absence of an observed effect of the intervention was not explained by differences in HIV-1 prevalence, sexual behaviour, STI cofactors, mobility, or socio-demographic composition at baseline. The earlier sexual debut in females and greater unprotected sex with casual partners seen in the intervention communities during the study period were not present at baseline but could reflect increased willingness to report high-risk behaviours in settings where there was more open discourse about HIV and AIDS. The peer-education programme could have had some effect for male but not for unmarried female participants. Preliminary findings from subsequent qualitative investigations indicate that, in the predominantly rural communities in which the study was conducted, poverty and the associated failure of income-generating projects meant that some peer educators were unable to maintain safer behaviours. Given their increased visibility within the community—intended to enhance their status and self-esteem and, thus, to reinforce their commitment to and role as models for behaviour change—they may, inadvertently, have served as negative role models and, thereby, may have contributed to the greater female early-age sexual activity. 
Free distribution of condoms by women still engaging in unprotected commercial sex led to their being poorly valued and reinforced their association with promiscuity.\n\nBODY.DISCUSSION.GENERALISABILITY OF FINDINGS:\nEpidemiological context can affect the impact of interventions [31], and structural obstacles can limit the pace and extent to which activities are implemented and the quality of these activities [32]. The HIV-1 epidemic stabilised in eastern Zimbabwe during the study period, with HIV-1 prevalence declining by 40%–50% in young adults [23]. This decline was accompanied by delayed sexual debut, reduced sexual partner change, and consistent condom use with casual partners [33,23]. Prevalence of syphilis, gonorrhoea, and Chlamydia is low, but non-curable herpes simplex virus type 2 remains common [22]. Risk reduction makes transmission more fragile, and an intervention could have a larger effect when set against secular behavioural changes [2]. Mathematical model simulations suggest that there would also be a greater chance of detecting a significant effect of the intervention even though there would be fewer seroconversions to power the calculation [34,35]. Structural obstacles to intervention implementation included HIV/AIDS mortality which disrupted the programme by claiming the lives of two programme coordinators and several of the nursing staff and peer-educators. Economic decline made the income-generating projects unfeasible and reduced the effectiveness of other components of the intervention. We believe that the coverage of the peer-education programme was satisfactory, given the focus on highly sexually active individuals. Meeting coverage could have been under-estimated in the survey since one-to-one discussions and activities at beer halls and other public places may not have been recognised as meetings by those present. However, the high level of spatial mobility limited the number of people who were reached at the required level of intensity and consistency, whilst national shortages of foreign currency restricted fuel and drug supplies, hampered attempts to extend community activities into the more remote rural areas, and disrupted the STI treatment programme in both the intervention and control communities. The intervention that we evaluated could have greater effect where an HIV-1 epidemic is younger, HIV-1 incidence is greater, local sexual networks are less diffuse, background STI control is weak, herpes simplex virus type 2 is less common, population mobility is lower, and/or the socio-economic climate is stable. We cannot rule out an effect of peer education in the urban intervention community since similar activities were implemented in the control community. Targeted peer education may work better in towns where bar-based sex work is more extensive. The absence of reduced HIV-1 incidence in farming estates reinforces doubts raised by the Harare factory workers study [8] concerning the efficacy of workplace peer education.\n\nBODY.DISCUSSION.OVERALL EVIDENCE:\nThese findings are important since the strategies evaluated—i.e., peer education, condom distribution, and syndromic STI management—are theory-based, have the potential for independent effects [11], and are widely promoted [36,37]. Syndromic STI management was effective in a nascent epidemic [6]. However, our disappointing findings echo those from recent trials [9,12] and emphasise the need for alternative strategies of behaviour-change promotion. 
Social marketing of condoms [14], larger poverty-alleviation programmes to reduce women's reliance on sex work—based on skills training and careful market research rather than on small-scale income-generating projects—and strategies which reach beyond high-activity core groups, such as the Popular Opinion Leader programme [38,39], and client-centred counselling [40], could be more viable and effective in reducing HIV-1 transmission in rural areas. Given the necessary economic conditions, unmarried women may still play a useful role in bar-based programmes since beer halls remain foci for high-risk behaviour [41,15].\n\nBODY.SUPPORTING INFORMATION.TRIAL REGISTRATION:\nThis trial has the registration number ISRNCT00390949 in the International Standard Randomized Controlled Trial Number Register. Found at: http://www.clinicaltrials.gov/ct/show/NCT00390949?order=1\n Protocol S1Protocol(35 KB DOC)Click here for additional data file. Protocol S2Revisions to Protocol(35 KB DOC)Click here for additional data file. Text S1CONSORT Checklist(48 KB DOC)Click here for additional data file. Text S2Ethical Approval, Information Letter, and Consent Forms(2.8 MB PDF)Click here for additional data file.\n\n**Question:** Compared to Baseline communities that received standard Government services what was the result of Baseline communities that received additional education and preventive measures on Spatial mobility?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}] | [{"role": "assistant", "content": "**Answer:** no significant difference\n"}] | no significant difference |
285 | [{"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Randomised phase II evaluation of irinotecan plus high-dose 5-fluorouracil and leucovorin (ILF) \n\n ABSTRACT:\nAn open-label randomised comparison of efficacy and tolerability of irinotecan plus high-dose 5-fluorouracil (5-FU) and leucovorin (LV) (ILF) with etoposide plus 5-FU/LV (ELF) in patients with untreated metastatic or locally advanced gastric cancer. One cycle of ILF comprised six once-weekly infusions of irinotecan 80 mg m−2, LV 500 mg m−2, 24-h 5-FU 2000 mg m−2, and ELF comprised three once-daily doses of etoposide 120 mg m−2, LV 300 mg m−2, 5-FU 500 mg m−2. In all, 56 patients received ILF and 58 ELF. Median age was 62 years, Karnofsky performance 90%, and disease status was comparable for both arms. The objective clinical response rates after 14 weeks treatment (primary end point) were 30% for ILF and 17% for ELF (risk ratio (RR) 0.57, 95% confidence interval (CI) 0.29–1.13, P=0.0766). Overall response rates over the entire treatment period for ILF and ELF were 43 and 24%, respectively (RR 0.56, 95% CI 0.33–0.97; P=0.0467). For ILF and ELF, respectively, median progression-free survival was 4.5 vs 2.3 months, time to treatment failure was 3.6 vs 2.2 months (P=0.4542), and overall survival was 10.8 vs 8.3 months (P=0.2818). Both regimens were well tolerated, the main grade 3/4 toxicities being diarrhoea (18%, ILF) and neutropenia (57%, ELF). The data from this randomised phase II study indicate that ILF provides a better response rate than ELF, and that ILF should be investigated further for the treatment of metastatic gastric cancer.\n\nBODY:\nGastric cancer is the fourth most common cancer in Europe and the third leading cause of cancer mortality (Bray et al, 2002). Although gastric cancer has declined over the past 50 years, the incidence of tumours at the gastro-oesophageal junction has increased (Blot et al, 1991). The use of chemotherapy for the management of patients with advanced gastric cancer, who have limited treatment options (Hohenberger and Gretschel, 2003), has only become widely acceptable over the last 20 years (Glimelius et al, 1997; Murad et al, 1993; Pyrhonen et al, 1995). 5-Fluorouracil (5-FU), usually in combination with leucovorin (LV, also referred to as folinic acid), forms the basis of most chemotherapy regimens used for the treatment of gastric cancer. A randomised phase III trial compared combinations of 5-FU with other active drugs in advanced gastric cancer: etoposide, LV and 5-FU (ELF) vs infusional 5-FU plus cisplatin (FUP) vs 5-FU, doxorubicin and methotrexate (FAMTX) (Vanhoefer et al, 2000). The overall response rates (ORRs) ranged from 9% (ELF) to 20% (FUP) and median survival times were between 6.7 months (FAMTX) and 7.2 months (both ELF and FUP). The observed differences were not statistically significant and there is still no definitive regimen for the treatment of gastric cancer. 
The combination of epirubicin, cisplatin and continuous infusion 5-FU (ECF) has been proposed as a standard first-line therapy for gastric cancer as a consequence of its significantly improved response rate (46%) and survival (8.7 months) when compared with FAMTX (Waters et al, 1999). More recently, ELF has been shown to provide better disease control (complete response (CR)+partial response (PR)+stable disease (SD)) for patients with proximal rather than distal tumours (85 vs 48%, P=0.04) (Schulze-Bergkamen et al, 2002). Tolerability, toxicity and ease of administration have become major determinants for selecting an appropriate therapy and ELF has emerged as a convenient, well-tolerated regimen that can be administered on an outpatient basis (Vanhoefer et al, 2000; Schulze-Bergkamen et al, 2002). Irinotecan (CPT-11, Camptosar; Pfizer Oncology, New York, USA) inhibits topoisomerase I thereby disrupting DNA replication and cell division within tumour cells. Response rates between 20 and 23% have been reported for irinotecan monotherapy in untreated gastric cancer (Futatsuki et al, 1994; Kohne et al, 2003). In patients who had failed previous therapy, irinotecan (180 mg m−2) combined with 5-FU (400 mg m−2, bolus) and LV (125 mg m−2) followed by 5-FU infusion (1200 mg m−2 over 48 h) yielded a response rate of 29%, while a further 34% of patients achieved SD (Assersohn et al, 2004). Irinotecan with bolus-LV/5-FU in the first-line treatment of gastric cancer provided a response rate of 22% (Blanke et al, 2001). However, this regimen (irinotecan 125 mg m−2, LV 20 mg m−2 plus 5-FU 500 mg m−2, all given weekly for 4 weeks followed by a 2-week rest) was associated with a high incidence of severe diarrhoea (28%) and neutropenia (36%) infection leading to substantial dose modifications (Blanke et al, 2001). By comparison, the combination of irinotecan with continuous rather than bolus infusions of LV/5-FU exhibited a lower incidence of grade 3 and 4 toxicities in colorectal cancer patients (Douillard et al, 2000; Saltz et al, 2000; Bouzid et al, 2003). Therefore, we have investigated a weekly dose of irinotecan (80 mg m−2) in combination with LV (500 mg m−2) and continuous 5-FU (2000 mg m−2 over 24 h) according to the AIO (Arbeitsgemeinschaft Internistische Onkologie) regimen (i.e. ILF) in gastric cancer patients. In a previous phase I study of ILF in the first- and second-line treatment of gastric cancer, we observed a response rate of 20% with a further 36% of patients reporting SD (Moehler et al, 2003). Importantly, toxicity was sufficiently manageable to allow outpatient-based treatment. Therefore, we initiated the present randomised, controlled, phase II study to compare the efficacy and safety of ILF with ELF in the first-line treatment of metastatic gastric cancer.\n\nBODY.PATIENTS AND METHODS.PATIENTS:\nEligible patients had untreated histologically proven gastric adenocarcinoma, or adenocarcinoma of the oesophagogastric junction with measurable metastatic disease and/or locally recurrent nodal involvement, were aged between 18 and 75 years with a Karnofsky performance score (KPS) ⩾60 and a life expectancy >12 weeks. Patients were required to have adequate haematological (neutrophils ⩾2.0 × 109 l−1, platelets ⩾150 × 109 l−1; haemoglobin ⩾10g dl−1), hepatic (total bilirubin ⩽1.25 × upper normal limit (UNL); aspartate (AST) and alanine (ALT) aminotransferases ⩽3 × UNL) and renal function (creatinine <1.25 × UNL). Patients with previous cancer therapies were excluded from the study. 
All patients provided signed and dated consent before entering the trial. The study was conducted in accordance with the principles of the Declaration of Helsinki and Good Clinical Practice guidelines and the protocol was initially approved by the Ethics committee of Aerztekammer Rheinland-Pfalz and later by all Ethics committees responsible for participating centres.\n\nBODY.PATIENTS AND METHODS.STUDY DESIGN AND RANDOMISATION:\nThis was an open-label, multicentre, phase II randomised trial with two treatment arms. Patients were randomly assigned and stratified according to centre, peritoneal involvement (yes/no) and prior gastrectomy (yes/no). The randomisation process was centralised and performed by the Coordination Centre for Clinical Trials (KKS), Mainz, Germany.\n\nBODY.PATIENTS AND METHODS.ADMINISTRATION OF STUDY DRUGS AND DOSE ADJUSTMENT:\nPatients assigned to ILF (Arm A) received irinotecan 80 mg m−2 intravenously (i.v.) over 60–90 min followed by LV 500 mg m−2 i.v. over 60 min and then 5-FU 2000 mg m−2 i.v. over 24 h, on day 1. Each cycle comprised six once-weekly treatments followed by a 13-day rest period. Systemic prophylactic atropine (0.25 mg) injections for irinotecan-related acute cholinergic symptoms were allowed for the first cycle but not recommended. Prophylactic treatment for delayed diarrhoea was not permitted. However, patients were carefully informed of the potential risk of delayed diarrhoea and neutropenia and the need for early intervention with loperamide, metoclopramide, antibiotics, or hospitalisation and parenteral rehydration in case of refractory diarrhoea (>48 h). Antiemetic treatment was performed using metoclopramide or HT-3 antagonists in a sequential manner. The prophylactic use of colony-stimulating factors was not permitted. Patients assigned to ELF (Arm B) received etoposide 120 mg m−2 i.v. over 60 min, LV 300 mg m−2 i.v. over 5–10 min and then 5-FU 500 mg m−2 bolus i.v. over 2–4 min, on day 1. Each cycle comprised three applications on consecutive days (1–3) followed by an 18-day rest. All study treatments were administered until disease progression, unacceptable toxicity or withdrawal of consent. In the event of toxicity (defined by the National Cancer Institute of Canada expanded common toxicity criteria; NCIC-CTC), treatment delays or dose reductions could be applied as follows. If at any time during a cycle there were moderate reductions in haematological function (neutrophil count 0.5–1.5 × 109 l−1, platelet count 25–75 × 109 l−1) or moderate diarrhoea or stomatitis (>grade 1), the next administration could be delayed for up to 2 weeks. If at any time haematological abnormalities were noted (neutrophils <0.5 × 109 l−1, neutrophils <1 × 109 l−1 with infection or fever, platelets <25 × 109 l−1) or if there were ⩾grade 3 or 4 diarrhoea or stomatitis, treatment had to be delayed until recovery to moderate levels (as described above) after which the following dose reductions were applied. For Arm A, 5-FU was reduced to 1600 mg m−2 and irinotecan to 65 mg m−2. For Arm B, in the case of haematological toxicity, etoposide had to be reduced to 100 mg m−2 and 5-FU to 400 mg m−2, and in the case of diarrhoea or stomatitis, 5-FU was reduced to 400 mg m−2. If a condition persisted despite dose reduction, or if a patient experienced myocardial infarction, treatment was terminated. In the case of hand–foot syndrome, the dose of 5-FU was to be reduced by 20%. 
Delayed diarrhoea was treated immediately with loperamide and rehydration and, if associated with severe neutropenia, a broad-spectrum antibiotic. Hospitalisation with i.v. rehydration was required for grade 4 or persistent (>48 h) diarrhoea, concomitant vomiting, fever, or KPS <60%.\n\nBODY.PATIENTS AND METHODS.STUDY EVALUATIONS:\nAt baseline up to five measurable lesions per organ and 10 lesions in total were to be identified as target lesions, measured using computed tomography (CT), and recorded according to the RECIST system (Response Evaluation Criteria In Solid Tumours; Therasse et al, 2000). The sum of the longest diameters for all target lesions was used as a reference for determining objective tumour response. Tumour responses were evaluated at week 7, week 14 and then every two cycles for patients receiving ILF or every four cycles for patients receiving ELF. Responses were determined according to RECIST as follows: complete response was defined as the disappearance of all target and nontarget lesions with no new lesions and confirmed by two observations at least 4 weeks apart; PR was defined as a reduction of 30% or more in the sums of the longest diameters of all measurable lesions relative to baseline with no new lesions; no change (NC) was defined as neither sufficient shrinkage to qualify for PR nor sufficient increase to qualify for progressive disease (PD) with no new lesions; and PD was defined as ⩾20% increase in the sum of the longest diameters, the occurrence of nontarget lesions (e.g. pleural effusion or ascites) or the appearance of brain metastases independently of performance at sites outside the brain. Safety and tolerability were assessed by regular clinical examinations and assessments of adverse events (weekly, at the end of treatment and at every 3 months of follow-up), disease symptoms, KPS, haematological and biochemical parameters.\n\nBODY.PATIENTS AND METHODS.STATISTICAL ANALYSIS:\nThe primary end point was objective clinical response (CR+PR) based on an interim analysis following 14 weeks of treatment. The one-sided exact Fisher's test was used to compare the treatment arms at the significance level α=5%. The analyses were performed on an intention-to-treat basis including all patients who were treated in the study. The secondary end points were ORR (for the entire treatment period), time to progression, tumour growth control, time to treatment failure (including progression, death or withdrawal) and survival. Time to event data were described by Kaplan–Meier estimates and treatment groups were compared by log-rank test. Time to event data were further evaluated by appropriate proportional Cox's models and results were summarised by hazard ratio point and 95% confidence interval (CI) estimates, and P-values of Wald χ2 test. Binary data were described by risk ratio (RR) point and 95% CI estimates and treatment groups were compared by exact Fisher's test. Binary data were further evaluated by appropriate logistic regression models and were summarised by odds ratio point and 95% CI estimates and P-values of Wald χ2 test. If not specified otherwise, P-values are presented from two-sided tests and two-sided 95% CI are presented. All analyses were performed using SAS version 6.12.\n\nBODY.RESULTS.PATIENT CHARACTERISTICS:\nIn all, 120 patients from 17 centres in Germany were randomised into the study between November 2000 and April 2003. 
Two patients from Arm A and four from Arm B withdrew without receiving study treatment; therefore, the intention-to-treat population contained 114 patients (56 received ILF and 58 received ELF). The baseline characteristics were well balanced between the two treatment groups (Table 1). The median age of patients was 62 years and the median KPS was 90%. In around one-third (31%) of patients, the primary tumour site was the oesophagogastric junction, 62% of patients had liver metastases and in 77% two or more organs were involved.\n\nBODY.RESULTS.RESPONSE RATES:\nThe objective clinical response rates following 14 weeks of treatment (primary end point) were 30% for ILF compared with 17% for ELF (RR 0.57, 95% CI 0.29–1.13, P=0.0766). The ORRs for the entire treatment period and including all assessments prior to discontinuation were 43% (24 of the 56) for ILF and 24% (14 of the 58) for ELF (Table 2). The increased response rate provided by ILF compared with ELF was statistically significant (RR=0.56; 95% CI=0.33–0.97; P=0.0467.) The tumour control rates (CR+PR+NC) were 63% (35 of the 56) and 41% (24 of the 58), respectively. Logistic regression analysis indicated that a baseline KPS ⩽80% reduced the likelihood of a response by 59% compared with patients whose KPS was greater than 80% (P=0.038) (Table 3). After adjustment for KPS, peritoneal involvement and surgery for primary tumour, the regression model also demonstrated that ILF was 138% more likely to provide a response when compared with the ELF regimen (Table 3, P=0.042).\n\nBODY.RESULTS.PROGRESSION-FREE SURVIVAL, TREATMENT FAILURE AND OVERALL SURVIVAL:\nAt the last data cutoff, the median follow-up was 9.4 months in Arm A and 5.8 months in Arm B. At this time, 96 of the 114 patients had died. Disease progression was the major cause of death and accounted for 79% of patients in both treatment groups. One patient from the ILF arm did not comply with the provided recommendations for the treatment of prolonged grade 3 diarrhoea and consequently died (i.e. toxic death). One patient in the ELF arm died from a cardiovascular event. Compared with ELF, the ILF regimen extended median progression-free survival, median time to treatment failure and median overall survival (Table 4). However, when the treatment groups were compared by log-rank test, there was no significant difference between the two treatments for any of these parameters (e.g. the Kaplan–Meier survival plot as shown in Figure 1). Investigational analyses found that the risk of progression was increased in patients with a primary tumour in the oesophagogastric junction and in those with metastatic involvement in two or more organs (Table 3). As would be expected, the risk of death was increased in patients with a low KPS, in those with two or more involved organs and in those with peritoneal involvement who received ELF (Table 3).\n\nBODY.RESULTS.SAFETY:\nThe median number of cycles administered in the study was two for ILF (Arm A) and three for ELF (Arm B) (Table 5). Although the median treatment duration period was over twice as long with ILF than with ELF, there were more dose administration delays (70%) and dose reductions (75%) with ILF than with ELF (52 and 45%, respectively). The main reason for discontinuing study treatment was disease progression; 54% of patients receiving ILF and 72% receiving ELF. 
Although only one patient in each treatment group withdrew because of treatment-related toxicity, five patients receiving ILF and three patients receiving ELF either withdrew consent or refused further treatment. The incidence of grade 3/4 haematological toxicities was low in both treatment groups with the exception of neutropenia, which was reported by 57% of patients receiving ELF (Table 6). There were more grade 3/4 gastrointestinal toxic events with ILF, notably diarrhoea, which was reported by 18% of patients compared with no reports with ELF. Grade 3/4 alopecia was reported by a significant proportion of patients receiving ELF (28%), but was only seen in 5% of those receiving ILF.\n\nBODY.DISCUSSION:\nAlthough chemotherapy regimens offer at best a slight, albeit statistically significant, improvement in survival for patients with gastric cancer, they are associated with a degree of toxicity that limits their value as a palliative treatment (Vanhoefer et al, 2000; Schoffski, 2002; Diaz-Rubio, 2004). The primary end point of clinical response at 14 weeks was selected so that a statistical comparison at a fixed time point could be made. However, as it is the convention in such studies, patients were treated until progression, and could respond to treatment at a later point. Therefore, the overall response and survival rates obtained from the entire dosing period provide a more clinically significant assessment of the efficacy of these regimens for discussion in relation to other trials in gastric cancer. The irinotecan-based combination provided again a greater ORR than that seen with the commonly used ELF regimen (43 vs 24%, respectively, P=0.0467). Overall response rates for ELF reported in previous studies range from 9 to 23% (Vanhoefer et al, 2000; Schulze-Bergkamen et al, 2002) and this compares well with the 24% response rate reported in this study. Accordingly, an ORR of nearly 50% for ILF, as seen in this study, is a substantial improvement and is in the range of previous reports of the use of this drug combination in this setting (Blanke et al, 2001; Moehler et al, 2003). The overall survival data in the present study also compare well with those from previous studies. The median overall survival with ELF has been reported at 7.2 and 8.0 months (Vanhoefer et al, 2000; Schulze-Bergkamen et al, 2002), which is similar to both the 8.3 months reported here and the data reported for irinotecan-based regimens in the second-line setting between 7.0 and 7.6 months (Moehler et al, 2003; Assersohn et al, 2004). By comparison, there was a nonsignificant trend for increased median survival with ILF in this study (10.8 months) and this compares well with data reported for more recent exploratory combinations such as capecitabine and docetaxel (10.5 months), and epirubicin, docetaxel and cisplatin (11.0 months) (Lee et al, 2004; Park et al, 2004). The same can be said of the progression-free survival period in the ILF group of 4.5 months, which compared well with the 4.1 and 5.2 months reported recently for docetaxel-based regimens (Lee et al, 2004; Park et al, 2004). In other randomised phase II studies, continuous 5-FU/LV infusion plus irinotecan has also provided promising efficacy (ORRs of 40–42%, median progression-free survival periods of 6.5–6.9 months and median overall survival periods of 10.7–11.3 months; Bouche et al, 2004; Pozzo et al, 2004). 
Consequently, large phase III studies are being considered to investigate irinotecan in combination with continuous 5-FU/LV infusion regimens. When patient histories, disease status and other factors were examined for their effects on clinical outcome, those patients who were in better general health (good performance status, low tumour burden) were more likely to achieve a response and less likely to have a progression event or die, regardless of the treatment arm to which they were randomised. Patients with peritoneal involvement at presentation have a generally poorer prognosis and as a group face a desperate need for improved treatment options (Rau et al, 1996). The data from this study demonstrated that these patients are less likely to suffer a fatal event if treated with ILF rather than ELF. This is potentially an important observation for the management of these difficult to treat patients (Blot et al, 1991; Bray et al, 2002). The extension of meaningful survival remains a major objective for oncologists who must therefore consider the impact of treatment-related toxicity. Overall, the occurrence of the toxicities in this study was consistent with the safety profiles of irinotecan, etoposide and 5-FU/LV. The ILF combination was well tolerated with a low and acceptable incidence of haematological toxicity. Gastrointestinal toxicity is a recognised side effect of ILF therapy (Douillard et al, 2000; Saltz et al, 2000), which can require hospitalisation and urgent medical intervention (Rothenberg et al, 2001). The incidence of grade 3 or 4 diarrhoea in the current study was comparable to the previous data in gastric cancer (Blanke et al, 2001; Moehler et al, 2003; Pozzo et al, 2004). With close monitoring of the patient, suitable medication and rehydration, most cases of diarrhoea can be managed effectively and do not present a significant obstacle to the clinical use of ILF. The toxicity observed in our study was lower than that reported by Douillard et al in the pivotal European first-line trial where patients with colorectal cancer received weekly irinotecan (80 mg m−2) plus an AIO-based regimen of 24-h high-dose 5-FU (2300 mg m−2) preceded by 2-h LV 500 mg m−2, and grade 3/4 diarrhoea was reported by 44% of patients (Douillard et al, 2000). The lower toxicity in our study might be due to the lower daily doses of 5-FU (2000 mg m−2 administered over 24 h). Work is ongoing to identify those patients who carry a specific genetic polymorphism in one of the main enzymes (UGT1A1) involved in the detoxification of irinotecan and are therefore more susceptible to the side effects of irinotecan (Mathijssen et al, 2001). Such work will improve the targeting of this useful therapy and may allow appropriate prescriptive dosing schedules on an individual basis. The present study concurs with similar phase II studies in that the combination of irinotecan with continuous LV/5-FU (ILF) represents a potentially valuable new treatment option for metastatic gastric cancer and requires further evaluation (Bouche et al, 2004; Pozzo et al, 2004).\n\n**Question:** Compared to Etoposide with 5-fluorouracil plus leucovorin (ELF) what was the result of Irinotecan with high-dose 5-fluorouracil plus leucovorin (ILF) on Grade 3/4 digestive system toxic events?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly increased\n"
}
] |
significantly increased
|
361
|
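A minimal sketch of the overall-response comparison described in the gastric-cancer row above (exact Fisher's test plus a risk-ratio point estimate). The counts come from the abstract (24/56 overall responders on ILF vs 14/58 on ELF); scipy is assumed here purely as a convenient stand-in for the SAS 6.12 analysis named in the article, so the two-sided p-value is only expected to land near the reported P=0.0467, allowing for differences in how a two-sided exact test is defined.

```python
# Re-run of the ILF vs ELF overall-response comparison using the published
# counts; illustrative only -- the trial's own analysis was done in SAS 6.12.
from scipy.stats import fisher_exact

ilf_resp, ilf_n = 24, 56   # ILF arm: overall responders / treated patients
elf_resp, elf_n = 14, 58   # ELF arm: overall responders / treated patients

# 2x2 table: rows = treatment arm, columns = responder / non-responder
table = [[ilf_resp, ilf_n - ilf_resp],
         [elf_resp, elf_n - elf_resp]]

_, p_value = fisher_exact(table, alternative="two-sided")

ilf_rate = ilf_resp / ilf_n        # about 0.43
elf_rate = elf_resp / elf_n        # about 0.24
risk_ratio = elf_rate / ilf_rate   # about 0.56, the direction quoted in the abstract (RR 0.56)

print(f"ILF ORR {ilf_rate:.0%}, ELF ORR {elf_rate:.0%}")
print(f"risk ratio (ELF relative to ILF) = {risk_ratio:.2f}, "
      f"two-sided Fisher p = {p_value:.4f}")
```

The same 2x2 layout applies to the 14-week primary-endpoint comparison, although the article reports those results only as percentages rather than raw counts.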
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Efficacy and Tolerability of Budesonide/Formoterol in One Hydrofluoroalkane Pressurized Metered-Dose Inhaler in Patients with Chronic Obstructive Pulmonary Disease\n\n ABSTRACT:\nBackground: Combination therapy with a long-acting bronchodilator and an inhaled corticosteroid (ICS) is recommended in patients with chronic obstructive pulmonary disease (COPD) who have frequent exacerbations. The efficacy and tolerability of the combination of budesonide/formoterol have been demonstrated in patients with COPD when administered via the dry powder inhaler (DPI) in a 1-year study and when administered via the hydrofluoroalkane (HFA) pressurized metered-dose inhaler (pMDI) in a 6-month study. Objective: This study assessed the long-term efficacy and tolerability of budesonide/formoterol HFA pMDI in patients with moderate to very severe COPD. Methods: This was a 12-month, randomized, double-blind, double-dummy, parallel-group, active- and placebo-controlled, multicentre study (NCT00206167) of 1964 patients aged ≥40 years with moderate to very severe COPD conducted from 2005 to 2007 at 237 sites in the US, Europe and Mexico. After 2 weeks of treatment based on previous therapy (ICSs, short-acting bronchodilators allowed), patients received one of the following treatments twice daily: budesonide/formoterol pMDI 160/4.5 μg × two inhalations (320/9 μg); budesonide/formoterol pMDI 80/4.5 μg × two inhalations (160/9 μg); formoterol DPI 4.5 μg × two inhalations (9 μg); or placebo. Main outcome measures: The co-primary efficacy variables were pre-dose forced expiratory volume in 1 second (FEV1) and 1-hour post-dose FEV1. Results: Budesonide/formoterol 320/9 μg demonstrated greater improvements in pre-dose FEV1 versus formoterol (p = 0.008), and both budesonide/formoterol doses demonstrated greater improvements in 1-hour post-dose FEV1 versus placebo (p < 0.001). The rate of COPD exacerbations was lower in both budesonide/formoterol groups compared with formoterol and placebo (p ≤ 0.004). Both budesonide/formoterol doses were more effective than placebo (p ≤ 0.006) for controlling dyspnoea and improving health status (St George's Respiratory Questionnaire). All treatments were generally well tolerated. The incidence of pneumonia was not different for active (3.4–4.0%) and placebo (5.0%) groups. Conclusions: Budesonide/formoterol pMDI (320/9 μg and 160/9 μg) improved pulmonary function and reduced symptoms and exacerbations over 1 year in patients with moderate to very severe COPD. Only budesonide/formoterol pMDI 320/9 μg demonstrated greater efficacy for both co-primary variables compared with formoterol DPI 9 μg. 
Both budesonide/formoterol pMDI dosages were well tolerated relative to formoterol and placebo.\n\nABSTRACT.ELECTRONIC SUPPLEMENTARY MATERIAL:\nSupplementary material is available for this article at 10.2165/00003495-200969050-00004 and is accessible for authorized users.\n\nBODY.BACKGROUND:\nIn patients with chronic obstructive pulmonary disease (COPD) and frequent exacerbations, the combination of a long-acting bronchodilator and an inhaled corticosteroid (ICS) is recommended.[1] Budesonide and formoterol delivered via a single dry powder inhaler (DPI; Symbicort® Turbuhaler®; AstraZeneca, Lund, Sweden) have been shown to improve pulmonary function[2,3] and reduce COPD exacerbation rates.[2,3] Fixed-combination budesonide/formoterol is available in the US in a hydrofluoroalkane pressurized metered-dose inhaler (pMDI; Symbicort® Inhalation Aerosol; AstraZeneca LP, Wilmington, DE, USA). While most patients with COPD can benefit from either a pMDIor DPI, the pMDI formulation allows patients whomay have inadequate peak inspiratory flow for optimal delivery from a DPI to use the pMDI effectively. A recent 6-month study demonstrated the efficacy and tolerability of budesonide/formoterol pMDI in patients with moderate to very severe COPD.[4] Treatment with budesonide/formoterol pMDI resulted in significant improvements in pulmonary function, dyspnoea and health-related quality of life; however, the duration of the study was limiting with respect to the analysis of low-frequency events, such as COPD exacerbations, and precluded an assessment of safety outcomes with long-term exposure.[4] In this study, the long-term efficacy and tolerability of two dosage strengths of budesonide/formoterol pMDI were assessed over 12 months in patients with moderate to very severe COPD.\n\nBODY.PATIENTS AND METHODS.PATIENTS:\nThe inclusion criteria were designed to select a population with moderate to very severe COPD with previous exacerbations (i.e. appropriate candidates for combination ICS/long-acting β2-adrenoceptor agonist [LABA] therapy): age ≥40 years, diagnosis of symptomatic COPD for >2 years, ≥10 pack-year smoking history, prebronchodilator forced expiratory volume in 1 second (FEV1) of ≤50% of predicted normal and prebronchodilator FEV1/forced vital capacity (FVC) of <70%. Patients were to have a Modified Medical Research Council dyspnoea scale score of ≥2 and a history of at least one COPD exacerbation requiring oral corticosteroids or antibacterials within 1–12 months before the first study visit. Additional enrolment criteria were the same as those in a similar 6-month study by Tashkin et al.[4]\n\nBODY.PATIENTS AND METHODS.STUDY DESIGN:\nThis was a randomized, double-blind, double-dummy, parallel-group, active- and placebo-controlled, multicentre trial (ClinicalTrials.gov identifier: NCT00206167) conducted from 2005 to 2007 at 237 sites in the US, Europe and Mexico. The study protocol was approved by a local institutional review board and ethics committee, and written informed consent was obtained from patients. The study was designed to conform with the Declaration of Helsinki, and was consistent with the International Conference on Harmonisation and Good Clinical Practice and applicable regulatory requirements. After meeting eligibility criteria, patients entered a 2-week run-in period, during which they received ICS monotherapy if previously stable on ICS (alone or in combination) and ipratropium bromide at a fixed dose if previously receiving anticholinergics. 
Albuterol (salbutamol) was permitted for rescue use throughout the study. At randomization, all previous ICSs were discontinued and patients were randomized to one of four treatments (figure 1). Patients who were treated with ipratropium bromide during the run-in period were allowed to continue on it throughout the study period. Other allowed and disallowed concomitant medications were the same as those reported in the study by Tashkin et al.[4]\nFig. 1Study design. To maintain blinding, patients received both a pressurized metered-dose inhaler (pMDI) and a dry powder inhaler (DPI) containing either active treatment or double-dummy placebo (PL) as appropriate. This study used formoterol (FM) DPI (Oxis® Turbuhaler®, AstraZeneca, Lund, Sweden) as the FM comparator because FM is not available as a hydrofluoroalkane pMDI in the US. A previous study in asthma patients reported equivalent FM-related bronchodilatory effects when FM was administered in combination with budesonide (BUD) via pMDI or alone via DPI.[5] Patients were asked to return to the clinic for follow-up visits 3–8 at the end of months 1, 2, 4, 6, 9 and 12, and received a telephone call 4 weeks after the last clinic visit. bid = twice daily; R = randomization.\n\nBODY.PATIENTS AND METHODS.EFFICACY EVALUATIONS:\nThe co-primary efficacy variables, measured at all clinic visits, were pre-dose FEV1 and 1-hour post-dose FEV1. Additional pulmonary function variables included pre-dose and 1-hour post-dose FVC measured at all clinic visits, and morning and evening peak expiratory flow (PEF) recorded daily in patient diaries. In a subset of patients (n = 491 of 1964; 25%) who agreed to undergo spirometry testing, 12-hour serial spirometry was performed and pre-dose and 1-hour post-dose inspiratory capacity (IC) were collected at randomization and months 6 and 12. FEV1 was measured pre-dose and at 5, 15, 30, 60, 120, 180, 240, 360, 480, 600 and 720 minutes post-dose. Baseline-adjusted average 12-hour FEV1 was calculated as the area between the 12-hour post-dose FEV1-over-time curve and the baseline pre-dose FEV1, divided by observation time. Spirometry was performed according to American Thoracic Society guidelines.[6] Crapo-predicted normals for FEV1 were used.[7] Secondary efficacy endpoints included COPD exacerbations, dyspnoea (separate and composite scores) and health status, all assessed as previously described.[4] A COPD exacerbation was defined as worsening of COPD requiring an oral corticosteroid or hospitalization.[4] Dyspnoea was assessed daily before the evening dose of study medication using the Breathlessness Diary, a validated single-item component of the Breathlessness Cough and Sputum Scale (BCSS).[8] Health status was assessed at months 1, 2, 6 and 12 using the St George's Respiratory Questionnaire (SGRQ).[9,10] Sleep score, percentage of awakening-free nights (sleep score of 0) and study rescue medication use were assessed as previously described by Tashkin et al.[4]\n\nBODY.PATIENTS AND METHODS.SAFETY EVALUATIONS:\nSafety was assessed by adverse event (AE) reporting. Pneumonia events were reported by physicians based on the Medical Dictionary for Regulatory Activities (version 10.0) pneumonia-related preferred terms (pneumonia, bronchopneumonia, lobar pneumonia or pneumonia staphylococcal). 
Vital signs and 12-lead ECGs were evaluated as previously described by Tashkin et al.[4] Subsets of patients were assessed for 24-hour urinary cortisol (n = 179), 24-hour Holter monitoring (n = 520), bone mineral density (BMD) at the hip and spine regions (n = 326), and ophthalmological assessments (n = 461), including intraocular pressure and progression of lenticular opacities. In all patients, blood and urine samples were collected, and comprehensive physical examinations were carried out at the time of screening and at the end of month 12. Blood and urine samples were analysed by a central laboratory (Quest Diagnostics Clinical Laboratories, Van Nuys, CA, USA). Vital signs, including heart rate and blood pressure, were measured at all study visits. Samples to assess 24-hour urinary cortisol levels were collected in a subset of patients at or before randomization and within 1 week before the 6- and 12-month visits, and analysed by Quest Diagnostics. 24-Hour Holter monitoring was performed at screening and months 1 and 4, BMD assessments (two sets of dual energy x-ray absorptiometry scans in the hip and spine regions) were performed at screening and the end of month 12, and ophthalmology assessments (intraocular pressure and lenticular opacities) were performed at screening and the end of months 6 and 12.\n\nBODY.PATIENTS AND METHODS.STATISTICAL ANALYSES:\nThe efficacy analysis set (i.e. intent-to-treat population) included all randomized patients who received at least one dose of randomized study medication and contributed sufficient data for at least one co-primary or secondary efficacy endpoint to be calculated during the randomized treatment period. The safety analysis population included all randomized patients who received at least one dose of randomized study medication and from whom any post-randomization data were available. For the subsets of patients who underwent serial spirometry, 24-hour urinary cortisol, BMD, 24-hour Holter monitoring and ophthalmological analyses, analysis sets were defined as those patients who received at least one dose of randomized study medication and had baseline and on-treatment values for the variable being assessed. Patients who discontinued prematurely completed final visit (month 12) assessments at the time of discontinuation, followed by the 4-week follow-up. A sample size of approximately 400 patients per treatment group was calculated to allow 90% power to detect a reduction in the number of COPD exacerbations of approximately 30%, adjusting for overdispersion of 2.3. This sample size ensured >95% power to detect a difference of 0.10L in FEV1, given an estimated standard deviation of 0.3 L. All tests were two-sided, with p ≤ 0.05 considered significant. The prespecified primary comparators were formoterol DPI for pre-dose FEV1 to demonstrate the contribution of budesonide and placebo for 1-hour post-dose FEV1. To address issues of multiplicity relating to multiple-dose comparisons and multiple variables of interest, a fixed-sequence testing procedure was adopted. The primary comparisons were prespecified as budesonide/formoterol pMDI 320/9 μg compared with (i) placebo for pre-dose FEV1 and 1-hour post-dose FEV1; and (ii) formoterol DPI 9 μg for pre-dose FEV1. If significant differences were obtained, comparisons continued with budesonide/formoterol pMDI 320/9 μg compared with placebo for (i) dyspnoea; (ii) SGRQ total score; and (iii) number of exacerbations. 
If significant differences were obtained for each of these key secondary variables, the lower dose of budesonide/formoterol pMDI was tested on the co-primary variables, as previously described, and, if significant differences were obtained, testing continued with number of exacerbations, dyspnoea and SGRQ total score compared with placebo. For all secondary efficacy variables, the primary comparison was budesonide/formoterol pMDI versus placebo. Primary and secondary variables were assessed as the change from baseline to the average over the randomized treatment period except for average 12-hour FEV1, mean FEV1 at 12 hours and SGRQ at end of treatment. Changes from baseline in the co-primary efficacy variables were analysed via analysis of co-variance (ANCOVA), adjusting for treatment, country and baseline value. The following three key secondary efficacy variables were identified: (i) dyspnoea; (ii) SGRQ total score; and (iii) exacerbations. Secondary variables were analysed similar to the co-primary efficacy variables; however, exacerbation rate (number/patient-treatment year) was analysed using a Poisson regression model, adjusted for country and differential randomization time, and time to first COPD exacerbation was described via Kaplan-Meier plot and analysed using the log-rank test. The number and percentage of patients experiencing clinically meaningful changes (based on a prespecified minimal important difference) in dyspnoea were analysed using the Cochran-Mantel-Haenszel test, adjusting for country. Geometric mean 24-hour urinary cortisol levels at end of treatment were compared between treatment groups using a multiplicative ANCOVA model. Mean changes from baseline to the average during the randomized treatment period in 12-lead ECG, and Holter variables and ophthalmology variables were analysed using a model similar to that used for the co-primary variables. BMD variables (natural logarithm of the analysis timepoint minus the natural logarithm of the respective baseline value for hip and spine) were analysed using an ANCOVA model adjusting for country, treatment and natural logarithm of the baseline value. Other safety data were summarized using descriptive statistics. No formal hypothesis testing of the safety data was performed.\n\nBODY.RESULTS.PATIENTS:\nOf 1964 randomized patients, 1355 completed the study (figure 2). Discontinuation was greater with placebo versus budesonide/formoterol and formoterol driven by withdrawal of consent. Time to discontinuation was significantly (p ≤ 0.004) prolonged in both budesonide/formoterol groups versus placebo. AE was the most common reason for study discontinuation, with a similar incidence across treatments (12.1–13.5%). Approximately 60% of patients had documented use of an ICS, either alone or in combination before entering the study. The percentage of patients who discontinued from the study in the formoterol and placebo groups was slightly higher in patients previously treated with ICS (34.9% and 38.0%, respectively) compared with those not previously receiving ICS before the study (26.8% and 33.9%, respectively). In contrast, patients in the budesonide/formoterol 320/9-μg group previously receiving ICS had a lower discontinuation rate compared with those not previously receiving ICS (23.1% vs 34.1%, respectively). In the budesonide/formoterol 160/9-μg group, the percentage of patients who discontinued the study was similar for those who were previously receiving an ICS (29.5%) and those who were not (28.2%).\nFig. 
2Patient disposition. AE = adverse event; bid = twice daily; BUD = budesonide; DPI = dry powder inhaler; FM = formoterol; med = medication; PL = placebo; pMDI = pressurized metered-dose inhaler. Most demographic and disease characteristics at baseline were similar across treatments (table I). However, in the serial spirometry subset (n = 491), mean percentage reversibility was greater in the budesonide/formoterol 320/9 μg (19.9%), budesonide/formoterol 160/9 μg (20.7%) and placebo (19.5%) groups versus formoterol (16.9%). COPD severity[1] was moderate in 17.8%, severe in 60.4% and very severe in 21.5% of patients. Common co-morbid conditions included hypertension (41.6%), lipid profile abnormalities (22.0%), cardiac disease (17.7%), diabetes mellitus (11.0%), osteoporosis (10.5%), cataracts (5.2%), atrial fibrillation/arrhythmia (4.3%) and congestive cardiac failure (2.9%).\nTable IPatient demographic and baseline clinical characteristics of randomized patients\n\nBODY.RESULTS.EFFICACY EVALUATIONS.PULMONARY FUNCTION.CO-PRIMARY ASSESSMENTS:\nImprovements in pre-dose FEV1 were significantly greater for budesonide/formoterol 320/9 μg compared with formoterol (primary comparison; p = 0.008) and for both budesonide/formoterol dosages compared with placebo (p < 0.001) [figure 3a]. Although improvements in 1-hour post-dose FEV1 were significantly greater for both budesonide/formoterol dosages compared with placebo (primary comparison; p < 0.001), budesonide/formoterol 320/9 μg also resulted in significantly greater improvements compared with formoterol (p = 0.023) [figure 3b]. Improvements from baseline were apparent at the first assessment (pre-dose FEV1 at end of month 1; 1-hour post-dose FEV1 at day of randomization) and overall maintained over the 12-month treatment period for both budesonide/formoterol dosages (figure 3a and b).\nFig. 3Co-primary efficacy endpoints. Least squares mean change from baseline by study visit over the randomized treatment period in (a) pre-dose forced expiratory volume in 1 second (FEV1) and (b) 1-hour post-dose FEV1. BUD = budesonide; DPI = dry powder inhaler; FM = formoterol; pMDI = pressurized metered-dose inhaler. * p < 0.001 vs placebo; † p ≤ 0.023 vs FM.\n\nBODY.RESULTS.EFFICACY EVALUATIONS.PULMONARY FUNCTION.SECONDARY ASSESSMENTS:\nMorning and evening PEF improved significantly for all active treatments compared with placebo (p ≤ 0.012) and for both budesonide/formoterol dosages compared with formoterol (p ≤ 0.017) [table II]. Results for pre-dose and 1-hour post-dose FVC are presented in the online supplement (see figures S1a and b in the supplementary material ['ArticlePlus'] at http://links.adisonline.com/DGZ/A5).\nTable IIMean (SD) changes from baseline in additional pulmonary function assessments In the serial spirometry subset, a ≥15% improvement in FEV1 was observed at 5 minutes after dose administration (first assessment) with all active treatments on the day of randomization (budesonide/formoterol 320/9 μg, 17.7%; budesonide/formoterol 160/9 μg, 20.3%; formoterol 9 μg, 16.5%) and at end of treatment (budesonide/formoterol 320/9 μg, 21.8%; budesonide/formoterol 160/9 μg, 22.4%; formoterol 9 μg, 15.0%) [figures 4a, b and c]. There was no evidence of a diminished effect at end of treatment over the 12-hour period in the budesonide/formoterol groups. 
Mean FEV1 at 12 hours and baseline-adjusted average 12-hour FEV1 were significantly improved with both budesonide/formoterol dosages compared with placebo on the day of randomization and at end of treatment (p ≤ 0.002) [table II]. Although both budesonide/formoterol dosages demonstrated significant improvements in mean FEV1 at 12 hours and in baseline-adjusted average 12-hour FEV1 compared with formoterol on the day of randomization (p ≤ 0.029), only budesonide/formoterol 320/9 μg demonstrated this effect at the end of treatment (p ≤ 0.004) [table II].\nFig. 4Mean percentage change from baseline in forced expiratory volume in 1 second (FEV1) over 12 hours at randomization and end of treatment (EOT) for (a) budesonide (BUD)/formoterol (FM) 320/9 μg twice daily (bid) vs placebo, (b) BUD/FM 160/9 μg bid vs placebo and (c) FM 9 μg bid vs placebo. DOR = day of randomization; DPI = dry powder inhaler; pMDI = pressurized metered-dose inhaler. No significant differences in pre-dose IC were observed among the treatment groups (figure 5a). Improvements from baseline to the average over the randomized treatment period in 1-hour post-dose IC were significantly greater with both budesonide/formoterol dosages compared with placebo (p < 0.001) and formoterol (p ≤ 0.018) [figure 5b]. Both dosages of budesonide/formoterol resulted in a 350 mL (≈25%) improvement from baseline to the average over the randomized treatment period in 1-hour post-dose IC, and formoterol resulted in a 210 mL (17%) improvement in 1-hour post-dose IC.\nFig. 5Least squares mean change from baseline by study visit over the randomized treatment period in (a) pre-dose inspiratory capacity (IC) and (b) 1-hour post-dose IC. BUD = budesonide; DPI = dry powder inhaler; FM = formoterol; pMDI = pressurized metered-dose inhaler. * p < 0.001 vs placebo; † p = 0.01 vs FM; ‡ p < 0.05 vs FM; § p < 0.01 vs placebo.\n\nBODY.RESULTS.EFFICACY EVALUATIONS.CHRONIC OBSTRUCTIVE PULMONARY DISEASE (COPD) EXACERBATIONS:\nTime to first COPD exacerbation was significantly prolonged with both budesonide/formoterol dosages compared with placebo (p ≤ 0.004) and with budesonide/formoterol 320/9 μg compared with formoterol (p = 0.026) [figure 6]. In addition, significant reductions in the overall number of exacerbations per patient-treatment year were observed with budesonide/formoterol 320/9 μg and 160/9 μg versus placebo (37% and 41%, respectively; p < 0.001) and formoterol (25% and 29%, respectively; p ≤ 0.004) [see figure S2 in the supplementary material]. These reductions were driven by exacerbations treated with oral corticosteroids, the rate of which was reduced with budesonide/formoterol 320/9 μg and 160/9 μg versus placebo (37.5% and 42.9%, respectively) and formoterol (24.1% and 30.6%, respectively) [p ≤ 0.006]. In addition, the percentage of patients in the budesonide/formoterol 320/9 μg, budesonide/formoterol 160/9 μg, formoterol and placebo groups who experienced an exacerbation was greater in those receiving ICS therapy either alone or in combination before the study (33.7%, 38.2%, 39.9% and 40.1%, respectively) compared with those not previously receiving ICS (25.8%, 24.9%, 29.4% and 32.8%, respectively).\nFig. 6Kaplan-Meier probability curve for the time to first chronic obstructive pulmonary disease exacerbation during randomized treatment. BUD = budesonide; DPI = dry powder inhaler; FM = formoterol; pMDI = pressurized metered-dose inhaler. 
* p ≤ 0.004 vs placebo; † p = 0.026 vs FM.\n\nBODY.RESULTS.EFFICACY EVALUATIONS.HEALTH STATUS AND COPD SYMPTOMS:\nImprovements in SGRQ total score were significantly greater for both budesonide/formoterol dosages compared with placebo (p ≤ 0.006) and for budesonide/formoterol 160/9 μg compared with formoterol (p = 0.006; table III). The improvements from baseline in SGRQ total score in the active treatment groups were not clinically meaningful versus placebo (i.e. decrease of ≥4 points). Significantly greater improvements in COPD symptom variables were observed for both budesonide/formoterol dosages versus placebo for all variables (BCSS, dyspnoea score, cough score, sputum score, sleep score, percentage of awakening-free nights and total daily rescue medication use) [p ≤ 0.003]. Compared with formoterol, mean improvements were significantly greater with budesonide/formoterol 320/9 μg (p ≤ 0.038) for all COPD symptom variables except sputum score and percentage of awakening-free nights, and with budesonide/formoterol 160/9 μg (p ≤ 0.047) for all COPD symptom variables except BCSS, dyspnoea and sputum score (table IV).\nTable IIIMean changes (SD) in SGRQ total and domain scores from baselinea to end of treatmentbTable IVMean (SD) changes in chronic obstructive pulmonary disease (COPD) symptom variables from baselinea to the average over the randomized treatment period Improvements (i.e. reductions) in dyspnoea scores were significantly greater for all active treatments versus placebo (p ≤ 0.003) and for budesonide/formoterol 320/9 μg versus formoterol (p = 0.032; table IV). All active treatment arms demonstrated ≥0.2 points (minimal important difference) change from baseline, but only budesonide/formoterol 320/9 μg had an increase of ≥0.2 points over placebo. A significantly (p < 0.001) greater percentage of patients in both budesonide/formoterol groups and in the formoterol group experienced clinically meaningful improvements in dyspnoea versus placebo (see figure S3 in the supplementary material).\n\nBODY.RESULTS.SAFETY EVALUATIONS:\nMean treatment exposure was lowest for placebo (270 days) compared with budesonide/formoterol 320/9 μg (305 days), budesonide/formoterol 160/9 μg (299 days) and formoterol (289 days). The most commonly reported AE (irrespective of causality) was COPD, which had a lower incidence with budesonide/formoterol 320/9 μg versus placebo and a slightly higher incidence with budesonide/formoterol 160/9 μg versus placebo (table V). AEs considered by the investigator to be related to study medication were generally similar among treatment groups with the most commonly reported being oral candidiasis, COPD and dysphonia (see table SI in the supplementary material).\nTable VOverall adverse events (AEs) [irrespective of relationship to study medication] reported by ≥3% of patients The total incidence of pneumonia-related AEs (pneumonia, bronchopneumonia, lobar pneumonia and pneumonia staphylococcal) was similar for budesonide/formoterol 320/9 μg and 160/9 μg (4.0% and 3.4%, respectively) compared with formoterol (3.4%) and placebo (5.0%) [see table SII in the supplementary material]. For potential lung infections other than pneumonia, the incidence was slightly higher in the active treatment groups versus placebo, which was driven largely by bronchitis (see table SII in the supplementary material). 
AEs typically or potentially associated with local and systemic effects of inhaled corticosteroids (local effects: aphonia, dysphonia, oral candidiasis and candidiasis; systemic effects: weight gain, adrenal suppression, ocular effects, skin effects, psychiatric disorder, diabetes control, thirst, taste effects and bone effects) were more frequent with budesonide/formoterol 320/9 μg (10.3% and 4.0%, respectively) and budesonide/formoterol 160/9 μg (5.7% and 4.5%, respectively) than formoterol (0.6% and 2.6%, respectively) and placebo (2.5% and 2.7%, respectively). The overall incidence of LABA class effects (i.e. tremor, palpitation, tachycardia, potassium changes, glucose changes, headache, agitation, anxiety, sleep effects and muscle cramp) was low, but higher among active treatment groups (budesonide/formoterol 320/9 μg [9.5%], budesonide/formoterol 160/9 μg [8.9%] and formoterol [6.5%]) versus placebo (4.8%). The incidence of cardiac-related AEs was higher for all active treatments (10.5–11.3%) versus placebo (6.9%). The most common cardiac-related AEs were hypertension and angina pectoris (budesonide/formoterol 320/9 μg [2.4% and 1.2%, respectively], budesonide/formoterol 160/9 μg [1.6% and 0.8%, respectively], formoterol [2.6% and 1.8%, respectively] and placebo [2.1% and 1.0%, respectively]). Although the number of atrial fibrillation AEs was low, all cases occurred within the active treatment groups. Discontinuations due to AEs (DAEs) occurred in 238 patients, with a similar incidence observed across treatment groups (range 11.3–12.5%) [see table SIII in the supplementary material]. The most common DAE was COPD, which was highest in the formoterol group (7.3%) and lowest in the budesonide/formoterol 320/9 μg group (4.0%); the incidence was similar in the budesonide/formoterol 160/9 μg (6.1%) and placebo (6.0%) groups (see table SIII in the supplementary material). The overall incidence of non-fatal serious adverse events (SAEs) was highest with formoterol (17.8%) compared with the other treatment groups and higher with budesonide/formoterol 320/9 μg (15.6%) and budesonide/formoterol 160/9 μg (13.6%) versus placebo (12.1%) [see table SIV in the supplementary material]. The most common non-fatal SAEs (occurring in five or more patients [all treatment groups combined]) were COPD (6.8%), pneumonia (1.5%), atrial fibrillation (0.5%), angina pectoris (0.3%), bronchitis (0.3%) and coronary artery disease (0.3%). These non-fatal SAEs occurred at a similar incidence in all treatment groups, except for COPD, which was slightly more common in the active treatment groups (budesonide/formoterol 320/9 μg [7.1%], budesonide/formoterol 160/9 μg [6.7%] and formoterol [7.9%]) versus placebo (5.6%). Fifteen patients died during the randomized treatment period with no imbalance across treatments: three in the budesonide/formoterol 320/9 μg group; six in the budesonide/formoterol 160/9 μg group; two in the formoterol group; and four in the placebo group. Fifteen patients died after the randomized treatment period (≥2 days after stopping study treatment) with no imbalance across treatments: five in the budesonide/formoterol 320/9 μg group; two in the budesonide/formoterol 160/9 μg group; four in the formoterol group; and four in the placebo group. None of the deaths were considered drug related by the investigator. Clinically significant changes in vital signs, 24-hour urinary cortisol, and 12-lead ECGs, QT interval and Holter assessments were rare. 
The geometric mean values of 24-hour urinary cortisol at 6 months and end of treatment were lower in both budesonide/formoterol dosage groups compared with formoterol and placebo (see table SV in the supplementary material). Significant differences in 24-hour urinary cortisol were observed between the budesonide/formoterol 320/9 μg and placebo groups (p = 0.035) at month 6 and between the budesonide/formoterol 320/9 μg and formoterol groups (p = 0.044) at end of treatment. Corrected QT (QTc) intervals ≥450 msec were experienced by similar percentages of patients across all treatment groups, whereas few patients experienced QTc intervals ≥500 msec or a QTc change ≥60 msec (see table SVI in the supplementary material). Data from ECG and Holter recordings showed a similar incidence of new onset atrial fibrillation in the active and placebo groups. BMD was stable over the 1-year study period in all treatment groups, with small but statistically significant differences in changes from baseline observed for budesonide/formoterol 320/9 μg compared with all other treatments for total lumbar spine BMD (p ≤ 0.037) and for budesonide/formoterol 320/9 μg versus formoterol for total hip BMD (p = 0.012) [see table SVII in the supplementary material]. The differences between the treatment groups in changes from baseline in BMD were of unclear clinical relevance, as the geometric least squares mean ratios for these treatment group comparisons were close to 1 (range 0.98–0.99). Minor increases in lenticular opacities and intraocular pressure were noted across all treatment groups in the ophthalmological analysis subset; the magnitude of these changes was similar among the treatment groups (see table SVIII in the supplementary material). Small but significant differences were observed for the change from baseline in posterior subcapsular score between the budesonide/formoterol groups (p = 0.022). Clinically significant changes in ophthalmological assessment were infrequent.\n\nBODY.DISCUSSION:\nIn this study, both budesonide/formoterol dosage strengths resulted in significant improvements from baseline over the randomized treatment period in pulmonary function, health status and COPD symptoms, as measured by dyspnoea, cough and sputum production, and a clinically significant reduction in COPD exacerbation rate compared with placebo. In addition, both budesonide/formoterol pMDI dosages were well tolerated relative to formoterol alone and placebo. The results of this 12-month study confirm those from the earlier related 6-month study by Tashkin et al.[4] and are consistent with those from studies of budesonide/formoterol DPI.[2,3] Taken together, the results demonstrate that this ICS/LABA combination provides benefits beyond formoterol alone in the treatment of COPD. The contribution of budesonide to improvements from baseline in pulmonary function and COPD symptoms was demonstrated by the greater efficacy of budesonide/formoterol 320/9 μg compared with formoterol alone for the co-primary variables of pre-dose FEV1 and 1-hour post-dose FEV1, and for dyspnoea and total BCSS scores, supporting use of the higher-dose combination for COPD symptoms. Both budesonide/formoterol dosage strengths demonstrated greater improvements from baseline compared with formoterol alone in morning and evening PEF, 12-hour FEV1, symptom scores and rescue medication use. 
In addition, the 25–29% reduction in exacerbation rate observed with both budesonide/formoterol dosage strengths beyond the substantial reduction achieved with formoterol alone further demonstrates the important contribution of budesonide to the combination product. The magnitude of the reduction in exacerbation rates reported in this study is similar to that reported previously for fixed-dose regimens of budesonide/formoterol 320/9 μg twice daily[2–4] and fluticasone propionate/salmeterol[11–13] relative to LABA alone. A key difference between this 12-month study and the 6-month study of similar design reported by Tashkin et al.[4] is that this study was powered to show a difference in exacerbations, while the 6-month study was not. This study demonstrated a significant reduction in exacerbation rates in patients treated with budesonide/formoterol 320/9 μg and 160/9 μg compared with those receiving formoterol or placebo. The fact that a greater percentage of patients in the formoterol and placebo groups who previously received ICS therapy, either alone or in combination, experienced exacerbations compared with those who did not previously receive ICS therapy may result in a skewing of the data; however, a similar response was observed in combination budesonide/formoterol patients with a greater percentage of patients who previously received ICS experiencing exacerbations compared with patients who were not previously treated with ICS therapy. Similarly, a greater percentage of patients who previously received ICS therapy discontinued in the formoterol and placebo groups than those who did not previously receive ICS therapy; however, this trend was reversed in the combination budesonide/formoterol groups. The long-term nature of this study allowed for a more thorough assessment of tolerability compared with the 6-month study.[4] No differences in pneumonia incidence were observed among treatment groups in this study or in the 6-month study, despite a shorter exposure time in the placebo group.[4] These results contrast with previous COPD studies that showed an increased incidence of pneumonia with fluticasone propionate/salmeterol treatment.[11–14] However, in this study and the 6-month study,[4] the incidence of potential lung infections other than pneumonia was slightly higher in all active treatments compared with placebo; these differences were largely driven by an increased incidence of bronchitis. The incidence of SAEs, including COPD SAEs, was slightly higher in both budesonide/formoterol groups compared with placebo. This finding may be attributed to the earlier discontinuation in the placebo group, which resulted in a shorter study drug exposure (35 and 29 fewer days vs budesonide/formoterol 320/9 μg and 160/9 μg, respectively). Although the overall incidence of atrial fibrillation, a known LABA class effect, was small (1.1%), an imbalance was noted between the formoterol-containing arms and placebo. However, no differences in new-onset atrial fibrillation were apparent among groups based on 24-hour Holter monitoring or ECG data collected at repeated visits during the randomized treatment period. 
Overall, the safety profile of budesonide/formoterol pMDI in this 12-month study was similar to that reported in the 6-month study by Tashkin et al.,[4] suggesting no increase in safety concerns with a longer duration of treatment.\n\nBODY.CONCLUSIONS:\nBoth dosage strengths of budesonide/formoterol pMDI (320/9 μg and 160/9 μg) were effective and well tolerated over 1 year of treatment in this population of patients with moderate to very severe COPD. In addition, budesonide/formoterol pMDI 320/9 μg demonstrated greater efficacy compared with formoterol for pre-dose FEV1, suggesting that the higher dosage strength containing twice the daily dose of budesonide is appropriate in this patient population.\n\nBODY.ELECTRONIC SUPPLEMENTARY MATERIAL:\nSupplementary material, approximately 247 KB.\n\n**Question:** Compared to Placebo what was the result of Budesonide/formoterol pMDI 320/9 μg on Dyspnoea score improvement?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly increased\n"
}
] |
significantly increased
|
365
|
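A minimal sketch, on made-up patient-level data, of the exacerbation-rate model described in the COPD row above: a Poisson regression of exacerbation counts with follow-up time as exposure, so estimates sit on the events-per-patient-treatment-year scale. statsmodels is an assumption (the article does not name the software for this model), and every array below is synthetic, so only the modelling pattern carries over, not the trial's numbers.

```python
# Sketch of a rate model in the spirit of the exacerbation analysis described
# above. `treated`, `years` and `events` are made-up illustrative data; the
# real analysis also adjusted for country (extra covariate columns) and the
# sample-size calculation assumed overdispersion, which a plain Poisson GLM
# does not model (a quasi-Poisson scale or negative binomial would be needed).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
treated = rng.integers(0, 2, size=n)           # 1 = combination arm, 0 = comparator
years = rng.uniform(0.3, 1.0, size=n)          # differential follow-up time
true_rate = np.where(treated == 1, 0.7, 1.1)   # exacerbations per patient-year
events = rng.poisson(true_rate * years)

X = sm.add_constant(treated.astype(float))
fit = sm.GLM(events, X,
             family=sm.families.Poisson(),
             exposure=years).fit()

# exp(treatment coefficient) is the rate ratio; 1 - rate ratio corresponds to
# the percentage reductions quoted in the article (25-41% vs comparators).
rr = np.exp(fit.params[1])
rr_lo, rr_hi = np.exp(fit.conf_int()[1])
print(f"rate ratio (arm 1 vs arm 0): {rr:.2f}  (95% CI {rr_lo:.2f}-{rr_hi:.2f})")
```

Time to first exacerbation, by contrast, was summarised with Kaplan-Meier curves and compared by log-rank test, which would be a separate survival-analysis step rather than part of this rate model.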
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Efficacy and Tolerability of Budesonide/Formoterol in One Hydrofluoroalkane Pressurized Metered-Dose Inhaler in Patients with Chronic Obstructive Pulmonary Disease\n\n ABSTRACT:\nBackground: Combination therapy with a long-acting bronchodilator and an inhaled corticosteroid (ICS) is recommended in patients with chronic obstructive pulmonary disease (COPD) who have frequent exacerbations. The efficacy and tolerability of the combination of budesonide/formoterol have been demonstrated in patients with COPD when administered via the dry powder inhaler (DPI) in a 1-year study and when administered via the hydrofluoroalkane (HFA) pressurized metered-dose inhaler (pMDI) in a 6-month study. Objective: This study assessed the long-term efficacy and tolerability of budesonide/formoterol HFA pMDI in patients with moderate to very severe COPD. Methods: This was a 12-month, randomized, double-blind, double-dummy, parallel-group, active- and placebo-controlled, multicentre study (NCT00206167) of 1964 patients aged ≥40 years with moderate to very severe COPD conducted from 2005 to 2007 at 237 sites in the US, Europe and Mexico. After 2 weeks of treatment based on previous therapy (ICSs, short-acting bronchodilators allowed), patients received one of the following treatments twice daily: budesonide/formoterol pMDI 160/4.5 μg × two inhalations (320/9 μg); budesonide/formoterol pMDI 80/4.5 μg × two inhalations (160/9 μg); formoterol DPI 4.5 μg × two inhalations (9 μg); or placebo. Main outcome measures: The co-primary efficacy variables were pre-dose forced expiratory volume in 1 second (FEV1) and 1-hour post-dose FEV1. Results: Budesonide/formoterol 320/9 μg demonstrated greater improvements in pre-dose FEV1 versus formoterol (p = 0.008), and both budesonide/formoterol doses demonstrated greater improvements in 1-hour post-dose FEV1 versus placebo (p < 0.001). The rate of COPD exacerbations was lower in both budesonide/formoterol groups compared with formoterol and placebo (p ≤ 0.004). Both budesonide/formoterol doses were more effective than placebo (p ≤ 0.006) for controlling dyspnoea and improving health status (St George's Respiratory Questionnaire). All treatments were generally well tolerated. The incidence of pneumonia was not different for active (3.4–4.0%) and placebo (5.0%) groups. Conclusions: Budesonide/formoterol pMDI (320/9 μg and 160/9 μg) improved pulmonary function and reduced symptoms and exacerbations over 1 year in patients with moderate to very severe COPD. Only budesonide/formoterol pMDI 320/9 μg demonstrated greater efficacy for both co-primary variables compared with formoterol DPI 9 μg. 
Both budesonide/formoterol pMDI dosages were well tolerated relative to formoterol and placebo.\n\nABSTRACT.ELECTRONIC SUPPLEMENTARY MATERIAL:\nSupplementary material is available for this article at 10.2165/00003495-200969050-00004 and is accessible for authorized users.\n\nBODY.BACKGROUND:\nIn patients with chronic obstructive pulmonary disease (COPD) and frequent exacerbations, the combination of a long-acting bronchodilator and an inhaled corticosteroid (ICS) is recommended.[1] Budesonide and formoterol delivered via a single dry powder inhaler (DPI; Symbicort® Turbuhaler®; AstraZeneca, Lund, Sweden) have been shown to improve pulmonary function[2,3] and reduce COPD exacerbation rates.[2,3] Fixed-combination budesonide/formoterol is available in the US in a hydrofluoroalkane pressurized metered-dose inhaler (pMDI; Symbicort® Inhalation Aerosol; AstraZeneca LP, Wilmington, DE, USA). While most patients with COPD can benefit from either a pMDIor DPI, the pMDI formulation allows patients whomay have inadequate peak inspiratory flow for optimal delivery from a DPI to use the pMDI effectively. A recent 6-month study demonstrated the efficacy and tolerability of budesonide/formoterol pMDI in patients with moderate to very severe COPD.[4] Treatment with budesonide/formoterol pMDI resulted in significant improvements in pulmonary function, dyspnoea and health-related quality of life; however, the duration of the study was limiting with respect to the analysis of low-frequency events, such as COPD exacerbations, and precluded an assessment of safety outcomes with long-term exposure.[4] In this study, the long-term efficacy and tolerability of two dosage strengths of budesonide/formoterol pMDI were assessed over 12 months in patients with moderate to very severe COPD.\n\nBODY.PATIENTS AND METHODS.PATIENTS:\nThe inclusion criteria were designed to select a population with moderate to very severe COPD with previous exacerbations (i.e. appropriate candidates for combination ICS/long-acting β2-adrenoceptor agonist [LABA] therapy): age ≥40 years, diagnosis of symptomatic COPD for >2 years, ≥10 pack-year smoking history, prebronchodilator forced expiratory volume in 1 second (FEV1) of ≤50% of predicted normal and prebronchodilator FEV1/forced vital capacity (FVC) of <70%. Patients were to have a Modified Medical Research Council dyspnoea scale score of ≥2 and a history of at least one COPD exacerbation requiring oral corticosteroids or antibacterials within 1–12 months before the first study visit. Additional enrolment criteria were the same as those in a similar 6-month study by Tashkin et al.[4]\n\nBODY.PATIENTS AND METHODS.STUDY DESIGN:\nThis was a randomized, double-blind, double-dummy, parallel-group, active- and placebo-controlled, multicentre trial (ClinicalTrials.gov identifier: NCT00206167) conducted from 2005 to 2007 at 237 sites in the US, Europe and Mexico. The study protocol was approved by a local institutional review board and ethics committee, and written informed consent was obtained from patients. The study was designed to conform with the Declaration of Helsinki, and was consistent with the International Conference on Harmonisation and Good Clinical Practice and applicable regulatory requirements. After meeting eligibility criteria, patients entered a 2-week run-in period, during which they received ICS monotherapy if previously stable on ICS (alone or in combination) and ipratropium bromide at a fixed dose if previously receiving anticholinergics. 
Albuterol (salbutamol) was permitted for rescue use throughout the study. At randomization, all previous ICSs were discontinued and patients were randomized to one of four treatments (figure 1). Patients who were treated with ipratropium bromide during the run-in period were allowed to continue on it throughout the study period. Other allowed and disallowed concomitant medications were the same as those reported in the study by Tashkin et al.[4]\nFig. 1Study design. To maintain blinding, patients received both a pressurized metered-dose inhaler (pMDI) and a dry powder inhaler (DPI) containing either active treatment or double-dummy placebo (PL) as appropriate. This study used formoterol (FM) DPI (Oxis® Turbuhaler®, AstraZeneca, Lund, Sweden) as the FM comparator because FM is not available as a hydrofluoroalkane pMDI in the US. A previous study in asthma patients reported equivalent FM-related bronchodilatory effects when FM was administered in combination with budesonide (BUD) via pMDI or alone via DPI.[5] Patients were asked to return to the clinic for follow-up visits 3–8 at the end of months 1, 2, 4, 6, 9 and 12, and received a telephone call 4 weeks after the last clinic visit. bid = twice daily; R = randomization.\n\nBODY.PATIENTS AND METHODS.EFFICACY EVALUATIONS:\nThe co-primary efficacy variables, measured at all clinic visits, were pre-dose FEV1 and 1-hour post-dose FEV1. Additional pulmonary function variables included pre-dose and 1-hour post-dose FVC measured at all clinic visits, and morning and evening peak expiratory flow (PEF) recorded daily in patient diaries. In a subset of patients (n = 491 of 1964; 25%) who agreed to undergo spirometry testing, 12-hour serial spirometry was performed and pre-dose and 1-hour post-dose inspiratory capacity (IC) were collected at randomization and months 6 and 12. FEV1 was measured pre-dose and at 5, 15, 30, 60, 120, 180, 240, 360, 480, 600 and 720 minutes post-dose. Baseline-adjusted average 12-hour FEV1 was calculated as the area between the 12-hour post-dose FEV1-over-time curve and the baseline pre-dose FEV1, divided by observation time. Spirometry was performed according to American Thoracic Society guidelines.[6] Crapo-predicted normals for FEV1 were used.[7] Secondary efficacy endpoints included COPD exacerbations, dyspnoea (separate and composite scores) and health status, all assessed as previously described.[4] A COPD exacerbation was defined as worsening of COPD requiring an oral corticosteroid or hospitalization.[4] Dyspnoea was assessed daily before the evening dose of study medication using the Breathlessness Diary, a validated single-item component of the Breathlessness Cough and Sputum Scale (BCSS).[8] Health status was assessed at months 1, 2, 6 and 12 using the St George's Respiratory Questionnaire (SGRQ).[9,10] Sleep score, percentage of awakening-free nights (sleep score of 0) and study rescue medication use were assessed as previously described by Tashkin et al.[4]\n\nBODY.PATIENTS AND METHODS.SAFETY EVALUATIONS:\nSafety was assessed by adverse event (AE) reporting. Pneumonia events were reported by physicians based on the Medical Dictionary for Regulatory Activities (version 10.0) pneumonia-related preferred terms (pneumonia, bronchopneumonia, lobar pneumonia or pneumonia staphylococcal). 
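For concreteness, the baseline-adjusted average 12-hour FEV1 defined under Efficacy Evaluations above reduces to a trapezoidal area-under-the-curve calculation: the area between the post-dose FEV1-time curve and the pre-dose baseline, divided by the observation time. A minimal Python sketch with invented spirometry values (not trial data; the function and variable names are illustrative) is:

```python
# Baseline-adjusted average 12-hour FEV1 as defined above: trapezoidal area of
# (FEV1 - baseline) over the 12-hour observation period, divided by total time.
# All values below are invented for illustration.

def baseline_adjusted_average_fev1(times_min, fev1_l, baseline_l):
    """Average baseline-adjusted FEV1 (litres) over the observation period."""
    area = 0.0
    for i in range(1, len(times_min)):
        dt = times_min[i] - times_min[i - 1]
        area += ((fev1_l[i - 1] - baseline_l) + (fev1_l[i] - baseline_l)) / 2.0 * dt
    return area / (times_min[-1] - times_min[0])

# Hypothetical serial-spirometry profile (minutes post-dose, FEV1 in litres)
times = [0, 5, 15, 30, 60, 120, 180, 240, 360, 480, 600, 720]
fev1 = [1.10, 1.22, 1.28, 1.30, 1.32, 1.31, 1.29, 1.27, 1.24, 1.21, 1.18, 1.15]
print(round(baseline_adjusted_average_fev1(times, fev1, baseline_l=1.10), 2))  # ~0.14 L
```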
Vital signs and 12-lead ECGs were evaluated as previously described by Tashkin et al.[4] Subsets of patients were assessed for 24-hour urinary cortisol (n = 179), 24-hour Holter monitoring (n = 520), bone mineral density (BMD) at the hip and spine regions (n = 326), and ophthalmological assessments (n = 461), including intraocular pressure and progression of lenticular opacities. In all patients, blood and urine samples were collected, and comprehensive physical examinations were carried out at the time of screening and at the end of month 12. Blood and urine samples were analysed by a central laboratory (Quest Diagnostics Clinical Laboratories, Van Nuys, CA, USA). Vital signs, including heart rate and blood pressure, were measured at all study visits. Samples to assess 24-hour urinary cortisol levels were collected in a subset of patients at or before randomization and within 1 week before the 6- and 12-month visits, and analysed by Quest Diagnostics. 24-Hour Holter monitoring was performed at screening and months 1 and 4, BMD assessments (two sets of dual energy x-ray absorptiometry scans in the hip and spine regions) were performed at screening and the end of month 12, and ophthalmology assessments (intraocular pressure and lenticular opacities) were performed at screening and the end of months 6 and 12.\n\nBODY.PATIENTS AND METHODS.STATISTICAL ANALYSES:\nThe efficacy analysis set (i.e. intent-to-treat population) included all randomized patients who received at least one dose of randomized study medication and contributed sufficient data for at least one co-primary or secondary efficacy endpoint to be calculated during the randomized treatment period. The safety analysis population included all randomized patients who received at least one dose of randomized study medication and from whom any post-randomization data were available. For the subsets of patients who underwent serial spirometry, 24-hour urinary cortisol, BMD, 24-hour Holter monitoring and ophthalmological analyses, analysis sets were defined as those patients who received at least one dose of randomized study medication and had baseline and on-treatment values for the variable being assessed. Patients who discontinued prematurely completed final visit (month 12) assessments at the time of discontinuation, followed by the 4-week follow-up. A sample size of approximately 400 patients per treatment group was calculated to allow 90% power to detect a reduction in the number of COPD exacerbations of approximately 30%, adjusting for overdispersion of 2.3. This sample size ensured >95% power to detect a difference of 0.10L in FEV1, given an estimated standard deviation of 0.3 L. All tests were two-sided, with p ≤ 0.05 considered significant. The prespecified primary comparators were formoterol DPI for pre-dose FEV1 to demonstrate the contribution of budesonide and placebo for 1-hour post-dose FEV1. To address issues of multiplicity relating to multiple-dose comparisons and multiple variables of interest, a fixed-sequence testing procedure was adopted. The primary comparisons were prespecified as budesonide/formoterol pMDI 320/9 μg compared with (i) placebo for pre-dose FEV1 and 1-hour post-dose FEV1; and (ii) formoterol DPI 9 μg for pre-dose FEV1. If significant differences were obtained, comparisons continued with budesonide/formoterol pMDI 320/9 μg compared with placebo for (i) dyspnoea; (ii) SGRQ total score; and (iii) number of exacerbations. 
If significant differences were obtained for each of these key secondary variables, the lower dose of budesonide/formoterol pMDI was tested on the co-primary variables, as previously described, and, if significant differences were obtained, testing continued with number of exacerbations, dyspnoea and SGRQ total score compared with placebo. For all secondary efficacy variables, the primary comparison was budesonide/formoterol pMDI versus placebo. Primary and secondary variables were assessed as the change from baseline to the average over the randomized treatment period except for average 12-hour FEV1, mean FEV1 at 12 hours and SGRQ at end of treatment. Changes from baseline in the co-primary efficacy variables were analysed via analysis of co-variance (ANCOVA), adjusting for treatment, country and baseline value. The following three key secondary efficacy variables were identified: (i) dyspnoea; (ii) SGRQ total score; and (iii) exacerbations. Secondary variables were analysed similar to the co-primary efficacy variables; however, exacerbation rate (number/patient-treatment year) was analysed using a Poisson regression model, adjusted for country and differential randomization time, and time to first COPD exacerbation was described via Kaplan-Meier plot and analysed using the log-rank test. The number and percentage of patients experiencing clinically meaningful changes (based on a prespecified minimal important difference) in dyspnoea were analysed using the Cochran-Mantel-Haenszel test, adjusting for country. Geometric mean 24-hour urinary cortisol levels at end of treatment were compared between treatment groups using a multiplicative ANCOVA model. Mean changes from baseline to the average during the randomized treatment period in 12-lead ECG, and Holter variables and ophthalmology variables were analysed using a model similar to that used for the co-primary variables. BMD variables (natural logarithm of the analysis timepoint minus the natural logarithm of the respective baseline value for hip and spine) were analysed using an ANCOVA model adjusting for country, treatment and natural logarithm of the baseline value. Other safety data were summarized using descriptive statistics. No formal hypothesis testing of the safety data was performed.\n\nBODY.RESULTS.PATIENTS:\nOf 1964 randomized patients, 1355 completed the study (figure 2). Discontinuation was greater with placebo versus budesonide/formoterol and formoterol driven by withdrawal of consent. Time to discontinuation was significantly (p ≤ 0.004) prolonged in both budesonide/formoterol groups versus placebo. AE was the most common reason for study discontinuation, with a similar incidence across treatments (12.1–13.5%). Approximately 60% of patients had documented use of an ICS, either alone or in combination before entering the study. The percentage of patients who discontinued from the study in the formoterol and placebo groups was slightly higher in patients previously treated with ICS (34.9% and 38.0%, respectively) compared with those not previously receiving ICS before the study (26.8% and 33.9%, respectively). In contrast, patients in the budesonide/formoterol 320/9-μg group previously receiving ICS had a lower discontinuation rate compared with those not previously receiving ICS (23.1% vs 34.1%, respectively). In the budesonide/formoterol 160/9-μg group, the percentage of patients who discontinued the study was similar for those who were previously receiving an ICS (29.5%) and those who were not (28.2%).\nFig. 
2Patient disposition. AE = adverse event; bid = twice daily; BUD = budesonide; DPI = dry powder inhaler; FM = formoterol; med = medication; PL = placebo; pMDI = pressurized metered-dose inhaler. Most demographic and disease characteristics at baseline were similar across treatments (table I). However, in the serial spirometry subset (n = 491), mean percentage reversibility was greater in the budesonide/formoterol 320/9 μg (19.9%), budesonide/formoterol 160/9 μg (20.7%) and placebo (19.5%) groups versus formoterol (16.9%). COPD severity[1] was moderate in 17.8%, severe in 60.4% and very severe in 21.5% of patients. Common co-morbid conditions included hypertension (41.6%), lipid profile abnormalities (22.0%), cardiac disease (17.7%), diabetes mellitus (11.0%), osteoporosis (10.5%), cataracts (5.2%), atrial fibrillation/arrhythmia (4.3%) and congestive cardiac failure (2.9%).\nTable IPatient demographic and baseline clinical characteristics of randomized patients\n\nBODY.RESULTS.EFFICACY EVALUATIONS.PULMONARY FUNCTION.CO-PRIMARY ASSESSMENTS:\nImprovements in pre-dose FEV1 were significantly greater for budesonide/formoterol 320/9 μg compared with formoterol (primary comparison; p = 0.008) and for both budesonide/formoterol dosages compared with placebo (p < 0.001) [figure 3a]. Although improvements in 1-hour post-dose FEV1 were significantly greater for both budesonide/formoterol dosages compared with placebo (primary comparison; p < 0.001), budesonide/formoterol 320/9 μg also resulted in significantly greater improvements compared with formoterol (p = 0.023) [figure 3b]. Improvements from baseline were apparent at the first assessment (pre-dose FEV1 at end of month 1; 1-hour post-dose FEV1 at day of randomization) and overall maintained over the 12-month treatment period for both budesonide/formoterol dosages (figure 3a and b).\nFig. 3Co-primary efficacy endpoints. Least squares mean change from baseline by study visit over the randomized treatment period in (a) pre-dose forced expiratory volume in 1 second (FEV1) and (b) 1-hour post-dose FEV1. BUD = budesonide; DPI = dry powder inhaler; FM = formoterol; pMDI = pressurized metered-dose inhaler. * p < 0.001 vs placebo; † p ≤ 0.023 vs FM.\n\nBODY.RESULTS.EFFICACY EVALUATIONS.PULMONARY FUNCTION.SECONDARY ASSESSMENTS:\nMorning and evening PEF improved significantly for all active treatments compared with placebo (p ≤ 0.012) and for both budesonide/formoterol dosages compared with formoterol (p ≤ 0.017) [table II]. Results for pre-dose and 1-hour post-dose FVC are presented in the online supplement (see figures S1a and b in the supplementary material ['ArticlePlus'] at http://links.adisonline.com/DGZ/A5).\nTable IIMean (SD) changes from baseline in additional pulmonary function assessments In the serial spirometry subset, a ≥15% improvement in FEV1 was observed at 5 minutes after dose administration (first assessment) with all active treatments on the day of randomization (budesonide/formoterol 320/9 μg, 17.7%; budesonide/formoterol 160/9 μg, 20.3%; formoterol 9 μg, 16.5%) and at end of treatment (budesonide/formoterol 320/9 μg, 21.8%; budesonide/formoterol 160/9 μg, 22.4%; formoterol 9 μg, 15.0%) [figures 4a, b and c]. There was no evidence of a diminished effect at end of treatment over the 12-hour period in the budesonide/formoterol groups. 
Mean FEV1 at 12 hours and baseline-adjusted average 12-hour FEV1 were significantly improved with both budesonide/formoterol dosages compared with placebo on the day of randomization and at end of treatment (p ≤ 0.002) [table II]. Although both budesonide/formoterol dosages demonstrated significant improvements in mean FEV1 at 12 hours and in baseline-adjusted average 12-hour FEV1 compared with formoterol on the day of randomization (p ≤ 0.029), only budesonide/formoterol 320/9 μg demonstrated this effect at the end of treatment (p ≤ 0.004) [table II].\nFig. 4Mean percentage change from baseline in forced expiratory volume in 1 second (FEV1) over 12 hours at randomization and end of treatment (EOT) for (a) budesonide (BUD)/formoterol (FM) 320/9 μg twice daily (bid) vs placebo, (b) BUD/FM 160/9 μg bid vs placebo and (c) FM 9 μg bid vs placebo. DOR = day of randomization; DPI = dry powder inhaler; pMDI = pressurized metered-dose inhaler. No significant differences in pre-dose IC were observed among the treatment groups (figure 5a). Improvements from baseline to the average over the randomized treatment period in 1-hour post-dose IC were significantly greater with both budesonide/formoterol dosages compared with placebo (p < 0.001) and formoterol (p ≤ 0.018) [figure 5b]. Both dosages of budesonide/formoterol resulted in a 350 mL (≈25%) improvement from baseline to the average over the randomized treatment period in 1-hour post-dose IC, and formoterol resulted in a 210 mL (17%) improvement in 1-hour post-dose IC.\nFig. 5Least squares mean change from baseline by study visit over the randomized treatment period in (a) pre-dose inspiratory capacity (IC) and (b) 1-hour post-dose IC. BUD = budesonide; DPI = dry powder inhaler; FM = formoterol; pMDI = pressurized metered-dose inhaler. * p < 0.001 vs placebo; † p = 0.01 vs FM; ‡ p < 0.05 vs FM; § p < 0.01 vs placebo.\n\nBODY.RESULTS.EFFICACY EVALUATIONS.CHRONIC OBSTRUCTIVE PULMONARY DISEASE (COPD) EXACERBATIONS:\nTime to first COPD exacerbation was significantly prolonged with both budesonide/formoterol dosages compared with placebo (p ≤ 0.004) and with budesonide/formoterol 320/9 μg compared with formoterol (p = 0.026) [figure 6]. In addition, significant reductions in the overall number of exacerbations per patient-treatment year were observed with budesonide/formoterol 320/9 μg and 160/9 μg versus placebo (37% and 41%, respectively; p < 0.001) and formoterol (25% and 29%, respectively; p ≤ 0.004) [see figure S2 in the supplementary material]. These reductions were driven by exacerbations treated with oral corticosteroids, the rate of which was reduced with budesonide/formoterol 320/9 μg and 160/9 μg versus placebo (37.5% and 42.9%, respectively) and formoterol (24.1% and 30.6%, respectively) [p ≤ 0.006]. In addition, the percentage of patients in the budesonide/formoterol 320/9 μg, budesonide/formoterol 160/9 μg, formoterol and placebo groups who experienced an exacerbation was greater in those receiving ICS therapy either alone or in combination before the study (33.7%, 38.2%, 39.9% and 40.1%, respectively) compared with those not previously receiving ICS (25.8%, 24.9%, 29.4% and 32.8%, respectively).\nFig. 6Kaplan-Meier probability curve for the time to first chronic obstructive pulmonary disease exacerbation during randomized treatment. BUD = budesonide; DPI = dry powder inhaler; FM = formoterol; pMDI = pressurized metered-dose inhaler. 
* p ≤ 0.004 vs placebo; † p = 0.026 vs FM.\n\nBODY.RESULTS.EFFICACY EVALUATIONS.HEALTH STATUS AND COPD SYMPTOMS:\nImprovements in SGRQ total score were significantly greater for both budesonide/formoterol dosages compared with placebo (p ≤ 0.006) and for budesonide/formoterol 160/9 μg compared with formoterol (p = 0.006; table III). The improvements from baseline in SGRQ total score in the active treatment groups were not clinically meaningful versus placebo (i.e. decrease of ≥4 points). Significantly greater improvements in COPD symptom variables were observed for both budesonide/formoterol dosages versus placebo for all variables (BCSS, dyspnoea score, cough score, sputum score, sleep score, percentage of awakening-free nights and total daily rescue medication use) [p ≤ 0.003]. Compared with formoterol, mean improvements were significantly greater with budesonide/formoterol 320/9 μg (p ≤ 0.038) for all COPD symptom variables except sputum score and percentage of awakening-free nights, and with budesonide/formoterol 160/9 μg (p ≤ 0.047) for all COPD symptom variables except BCSS, dyspnoea and sputum score (table IV).\nTable IIIMean changes (SD) in SGRQ total and domain scores from baselinea to end of treatmentbTable IVMean (SD) changes in chronic obstructive pulmonary disease (COPD) symptom variables from baselinea to the average over the randomized treatment period Improvements (i.e. reductions) in dyspnoea scores were significantly greater for all active treatments versus placebo (p ≤ 0.003) and for budesonide/formoterol 320/9 μg versus formoterol (p = 0.032; table IV). All active treatment arms demonstrated ≥0.2 points (minimal important difference) change from baseline, but only budesonide/formoterol 320/9 μg had an increase of ≥0.2 points over placebo. A significantly (p < 0.001) greater percentage of patients in both budesonide/formoterol groups and in the formoterol group experienced clinically meaningful improvements in dyspnoea versus placebo (see figure S3 in the supplementary material).\n\nBODY.RESULTS.SAFETY EVALUATIONS:\nMean treatment exposure was lowest for placebo (270 days) compared with budesonide/formoterol 320/9 μg (305 days), budesonide/formoterol 160/9 μg (299 days) and formoterol (289 days). The most commonly reported AE (irrespective of causality) was COPD, which had a lower incidence with budesonide/formoterol 320/9 μg versus placebo and a slightly higher incidence with budesonide/formoterol 160/9 μg versus placebo (table V). AEs considered by the investigator to be related to study medication were generally similar among treatment groups with the most commonly reported being oral candidiasis, COPD and dysphonia (see table SI in the supplementary material).\nTable VOverall adverse events (AEs) [irrespective of relationship to study medication] reported by ≥3% of patients The total incidence of pneumonia-related AEs (pneumonia, bronchopneumonia, lobar pneumonia and pneumonia staphylococcal) was similar for budesonide/formoterol 320/9 μg and 160/9 μg (4.0% and 3.4%, respectively) compared with formoterol (3.4%) and placebo (5.0%) [see table SII in the supplementary material]. For potential lung infections other than pneumonia, the incidence was slightly higher in the active treatment groups versus placebo, which was driven largely by bronchitis (see table SII in the supplementary material). 
AEs typically or potentially associated with local and systemic effects of inhaled corticosteroids (local effects: aphonia, dysphonia, oral candidiasis and candidiasis; systemic effects: weight gain, adrenal suppression, ocular effects, skin effects, psychiatric disorder, diabetes control, thirst, taste effects and bone effects) were more frequent with budesonide/formoterol 320/9 μg (10.3% and 4.0%, respectively) and budesonide/formoterol 160/9 μg (5.7% and 4.5%, respectively) than formoterol (0.6% and 2.6%, respectively) and placebo (2.5% and 2.7%, respectively). The overall incidence of LABA class effects (i.e. tremor, palpitation, tachycardia, potassium changes, glucose changes, headache, agitation, anxiety, sleep effects and muscle cramp) was low, but higher among active treatment groups (budesonide/formoterol 320/9 μg [9.5%], budesonide/formoterol 160/9 μg [8.9%] and formoterol [6.5%]) versus placebo (4.8%). The incidence of cardiac-related AEs was higher for all active treatments (10.5–11.3%) versus placebo (6.9%). The most common cardiac-related AEs were hypertension and angina pectoris (budesonide/formoterol 320/9 μg [2.4% and 1.2%, respectively], budesonide/formoterol 160/9 μg [1.6% and 0.8%, respectively], formoterol [2.6% and 1.8%, respectively] and placebo [2.1% and 1.0%, respectively]). Although the number of atrial fibrillation AEs was low, all cases occurred within the active treatment groups. Discontinuations due to AEs (DAEs) occurred in 238 patients, with a similar incidence observed across treatment groups (range 11.3–12.5%) [see table SIII in the supplementary material]. The most common DAE was COPD, which was highest in the formoterol group (7.3%) and lowest in the budesonide/formoterol 320/9 μg group (4.0%); the incidence was similar in the budesonide/formoterol 160/9 μg (6.1%) and placebo (6.0%) groups (see table SIII in the supplementary material). The overall incidence of non-fatal serious adverse events (SAEs) was highest with formoterol (17.8%) compared with the other treatment groups and higher with budesonide/formoterol 320/9 μg (15.6%) and budesonide/formoterol 160/9 μg (13.6%) versus placebo (12.1%) [see table SIV in the supplementary material]. The most common non-fatal SAEs (occurring in five or more patients [all treatment groups combined]) were COPD (6.8%), pneumonia (1.5%), atrial fibrillation (0.5%), angina pectoris (0.3%), bronchitis (0.3%) and coronary artery disease (0.3%). These non-fatal SAEs occurred at a similar incidence in all treatment groups, except for COPD, which was slightly more common in the active treatment groups (budesonide/formoterol 320/9 μg [7.1%], budesonide/formoterol 160/9 μg [6.7%] and formoterol [7.9%]) versus placebo (5.6%). Fifteen patients died during the randomized treatment period with no imbalance across treatments: three in the budesonide/formoterol 320/9 μg group; six in the budesonide/formoterol 160/9 μg group; two in the formoterol group; and four in the placebo group. Fifteen patients died after the randomized treatment period (≥2 days after stopping study treatment) with no imbalance across treatments: five in the budesonide/formoterol 320/9 μg group; two in the budesonide/formoterol 160/9 μg group; four in the formoterol group; and four in the placebo group. None of the deaths were considered drug related by the investigator. Clinically significant changes in vital signs, 24-hour urinary cortisol, and 12-lead ECGs, QT interval and Holter assessments were rare. 
The geometric mean values of 24-hour urinary cortisol at 6 months and end of treatment were lower in both budesonide/formoterol dosage groups compared with formoterol and placebo (see table SV in the supplementary material). Significant differences in 24-hour urinary cortisol were observed between the budesonide/formoterol 320/9 μg and placebo groups (p = 0.035) at month 6 and between the budesonide/formoterol 320/9 μg and formoterol groups (p = 0.044) at end of treatment. Corrected QT (QTc) intervals ≥450 msec were experienced by similar percentages of patients across all treatment groups, whereas few patients experienced QTc intervals ≥500 msec or a QTc change ≥60 msec (see table SVI in the supplementary material). Data from ECG and Holter recordings showed a similar incidence of new onset atrial fibrillation in the active and placebo groups. BMD was stable over the 1-year study period in all treatment groups, with small but statistically significant differences in changes from baseline observed for budesonide/formoterol 320/9 μg compared with all other treatments for total lumbar spine BMD (p ≤ 0.037) and for budesonide/formoterol 320/9 μg versus formoterol for total hip BMD (p = 0.012) [see table SVII in the supplementary material]. The differences between the treatment groups in changes from baseline in BMD were of unclear clinical relevance, as the geometric least squares mean ratios for these treatment group comparisons were close to 1 (range 0.98–0.99). Minor increases in lenticular opacities and intraocular pressure were noted across all treatment groups in the ophthalmological analysis subset; the magnitude of these changes was similar among the treatment groups (see table SVIII in the supplementary material). Small but significant differences were observed for the change from baseline in posterior subcapsular score between the budesonide/formoterol groups (p = 0.022). Clinically significant changes in ophthalmological assessment were infrequent.\n\nBODY.DISCUSSION:\nIn this study, both budesonide/formoterol dosage strengths resulted in significant improvements from baseline over the randomized treatment period in pulmonary function, health status and COPD symptoms, as measured by dyspnoea, cough and sputum production, and a clinically significant reduction in COPD exacerbation rate compared with placebo. In addition, both budesonide/formoterol pMDI dosages were well tolerated relative to formoterol alone and placebo. The results of this 12-month study confirm those from the earlier related 6-month study by Tashkin et al.[4] and are consistent with those from studies of budesonide/formoterol DPI.[2,3] Taken together, the results demonstrate that this ICS/LABA combination provides benefits beyond formoterol alone in the treatment of COPD. The contribution of budesonide to improvements from baseline in pulmonary function and COPD symptoms was demonstrated by the greater efficacy of budesonide/formoterol 320/9 μg compared with formoterol alone for the co-primary variables of pre-dose FEV1 and 1-hour post-dose FEV1, and for dyspnoea and total BCSS scores, supporting use of the higher-dose combination for COPD symptoms. Both budesonide/formoterol dosage strengths demonstrated greater improvements from baseline compared with formoterol alone in morning and evening PEF, 12-hour FEV1, symptom scores and rescue medication use. 
In addition, the 25–29% reduction in exacerbation rate observed with both budesonide/formoterol dosage strengths beyond the substantial reduction achieved with formoterol alone further demonstrates the important contribution of budesonide to the combination product. The magnitude of the reduction in exacerbation rates reported in this study is similar to that reported previously for fixed-dose regimens of budesonide/formoterol 320/9 μg twice daily[2–4] and fluticasone propionate/salmeterol[11–13] relative to LABA alone. A key difference between this 12-month study and the 6-month study of similar design reported by Tashkin et al.[4] is that this study was powered to show a difference in exacerbations, while the 6-month study was not. This study demonstrated a significant reduction in exacerbation rates in patients treated with budesonide/formoterol 320/9 μg and 160/9 μg compared with those receiving formoterol or placebo. The fact that a greater percentage of patients in the formoterol and placebo groups who previously received ICS therapy, either alone or in combination, experienced exacerbations compared with those who did not previously receive ICS therapy may result in a skewing of the data; however, a similar response was observed in combination budesonide/formoterol patients with a greater percentage of patients who previously received ICS experiencing exacerbations compared with patients who were not previously treated with ICS therapy. Similarly, a greater percentage of patients who previously received ICS therapy discontinued in the formoterol and placebo groups than those who did not previously receive ICS therapy; however, this trend was reversed in the combination budesonide/formoterol groups. The long-term nature of this study allowed for a more thorough assessment of tolerability compared with the 6-month study.[4] No differences in pneumonia incidence were observed among treatment groups in this study or in the 6-month study, despite a shorter exposure time in the placebo group.[4] These results contrast with previous COPD studies that showed an increased incidence of pneumonia with fluticasone propionate/salmeterol treatment.[11–14] However, in this study and the 6-month study,[4] the incidence of potential lung infections other than pneumonia was slightly higher in all active treatments compared with placebo; these differences were largely driven by an increased incidence of bronchitis. The incidence of SAEs, including COPD SAEs, was slightly higher in both budesonide/formoterol groups compared with placebo. This finding may be attributed to the earlier discontinuation in the placebo group, which resulted in a shorter study drug exposure (35 and 29 fewer days vs budesonide/formoterol 320/9 μg and 160/9 μg, respectively). Although the overall incidence of atrial fibrillation, a known LABA class effect, was small (1.1%), an imbalance was noted between the formoterol-containing arms and placebo. However, no differences in new-onset atrial fibrillation were apparent among groups based on 24-hour Holter monitoring or ECG data collected at repeated visits during the randomized treatment period. 
Overall, the safety profile of budesonide/formoterol pMDI in this 12-month study was similar to that reported in the 6-month study by Tashkin et al.,[4] suggesting no increase in safety concerns with a longer duration of treatment.\n\nBODY.CONCLUSIONS:\nBoth dosage strengths of budesonide/formoterol pMDI (320/9 μg and 160/9 μg) were effective and well tolerated over 1 year of treatment in this population of patients with moderate to very severe COPD. In addition, budesonide/formoterol pMDI 320/9 μg demonstrated greater efficacy compared with formoterol for pre-dose FEV1, suggesting that the higher dosage strength containing twice the daily dose of budesonide is appropriate in this patient population.\n\nBODY.ELECTRONIC SUPPLEMENTARY MATERIAL:\nSupplementary material, approximately 247 KB.\n\n**Question:** Compared to Placebo what was the result of Budesonide/formoterol pMDI 320/9 μg on Sleep score improvement?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly increased\n"
}
] |
significantly increased
|
461
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Artemether-Lumefantrine versus Dihydroartemisinin-Piperaquine for Treatment of Malaria: A Randomized Trial\n\n ABSTRACT.OBJECTIVES:\nTo compare the efficacy and safety of artemether-lumefantrine (AL) and dihydroartemisinin-piperaquine (DP) for treating uncomplicated falciparum malaria in Uganda.\n\nABSTRACT.DESIGN:\nRandomized single-blinded clinical trial.\n\nABSTRACT.SETTING:\nApac, Uganda, an area of very high malaria transmission intensity.\n\nABSTRACT.PARTICIPANTS:\nChildren aged 6 mo to 10 y with uncomplicated falciparum malaria.\n\nABSTRACT.INTERVENTION:\nTreatment of malaria with AL or DP, each following standard 3-d dosing regimens.\n\nABSTRACT.OUTCOME MEASURES:\nRisks of recurrent parasitemia at 28 and 42 d, unadjusted and adjusted by genotyping to distinguish recrudescences and new infections.\n\nABSTRACT.RESULTS:\nOf 421 enrolled participants, 417 (99%) completed follow-up. The unadjusted risk of recurrent falciparum parasitemia was significantly lower for participants treated with DP than for those treated with AL after 28 d (11% versus 29%; risk difference [RD] 18%, 95% confidence interval [CI] 11%–26%) and 42 d (43% versus 53%; RD 9.6%, 95% CI 0%–19%) of follow-up. Similarly, the risk of recurrent parasitemia due to possible recrudescence (adjusted by genotyping) was significantly lower for participants treated with DP than for those treated with AL after 28 d (1.9% versus 8.9%; RD 7.0%, 95% CI 2.5%–12%) and 42 d (6.9% versus 16%; RD 9.5%, 95% CI 2.8%–16%). Patients treated with DP had a lower risk of recurrent parasitemia due to non-falciparum species, development of gametocytemia, and higher mean increase in hemoglobin compared to patients treated with AL. Both drugs were well tolerated; serious adverse events were uncommon and unrelated to study drugs.\n\nABSTRACT.CONCLUSION:\nDP was superior to AL for reducing the risk of recurrent parasitemia and gametocytemia, and provided improved hemoglobin recovery. DP thus appears to be a good alternative to AL as first-line treatment of uncomplicated malaria in Uganda. To maximize the benefit of artemisinin-based combination therapy in Africa, treatment should be integrated with aggressive strategies to reduce malaria transmission intensity.\n\nBODY.INTRODUCTION:\nIn Africa, treatment of uncomplicated malaria is undergoing dramatic changes. In response to widespread resistance of the parasite to commonly used monotherapies, particularly chloroquine (CQ) and sulfadoxine-pyrimethamine (SP), many countries have recently adopted artemisinin-based combination therapy (ACT) as a first-line regimen for the treatment of uncomplicated malaria [1]. In Uganda, artemether-lumefantrine (AL) was chosen to replace the combination of CQ + SP as the first-line regimen for malaria in 2004, with amodiaquine + artesunate (AQ + AS) offered as an alternative regimen. 
Although data on AL were limited at that time, subsequent studies from Uganda [2,3] and elsewhere in Africa [4–6] showed that AL was highly efficacious and well tolerated for uncomplicated malaria. Indeed, in the two published studies of AL from Uganda the risk of treatment failure due to recrudescence was reported to be less than 2% [2,3]. However, despite the promise of AL, there are substantial limitations to this regimen, including twice-daily dosing and need for administration with fatty food. A concern with all antimalarial regimens, particularly in areas of high transmission, is frequent recurrence of malaria after therapy. This concern was highlighted in a recent study comparing efficacies of the ACTs AL and AQ + AS in Tororo, Uganda, an area with very high malaria transmission [3]. In that study, both regimens were highly efficacious for eradication of infections but risks of reinfection were extremely high, with 50% of AL-treated and 66% of AQ + AS–treated patients developing recurrent parasitemia within 28 d. Dihydroartemisinin-piperaquine (DP) is a fixed-dose ACT that has recently become available in Africa. In studies from Southeast Asia, DP appeared to be well tolerated and highly efficacious against multidrug-resistant falciparum malaria [7–11]. However, the epidemiology of malaria and patterns of antimalarial drug use are quite different in Africa than in Asia [12,13]. In the only published study evaluating DP in Africa, DP was highly efficacious with a good safety and tolerability profile at three sites in Rwanda [14]. To compare the performance of DP with the new first-line therapy in Uganda, we conducted a randomized clinical trial comparing the efficacy and safety of AL and DP for the treatment of uncomplicated falciparum malaria in Apac, an area of extremely high transmission intensity.\n\nBODY.METHODS.STUDY SITE:\nThe study was conducted at Aduku Health Centre, Apac District, Uganda. The district experiences perennial holoendemic malaria. The entomological inoculation rate in Apac, a measure of transmission intensity, was measured at 1,564 infectious bites per person per year [15]. The study protocol was approved by the Makerere University Research and Ethics Committee, the Uganda National Council of Science and Technology, and the University of California San Francisco Committee for Human Research.\n\nBODY.METHODS.PARTICIPANTS:\nConsecutive patients presenting to the health center with symptoms suggestive of malaria and a positive screening thick blood smear were referred to study physicians for further assessment. Patients were enrolled if they fulfilled the following selection criteria: (1) age 6 mo to 10 y; (2) weight ≥ 5 kg; (3) history of fever in the last 24 h or axillary temperature ≥ 37.5 °C; (4) no history of serious side effects to study medications; (5) no evidence of a concomitant febrile illness; (6) provision of informed consent by a parent or guardian; (7) no danger signs or evidence of severe malaria; and (8) Plasmodium falciparum monoinfection with parasite density 2,000–200,000/μl of blood. Because laboratory results were generally not available until the following day, a patient could be excluded after randomization.\n\nBODY.METHODS.PROCEDURES:\nAt enrollment, we asked children's parents or guardians about prior antimalarial therapy, use of other medications, and presence of common symptoms. Axillary temperature and weight were measured, and a physical examination was performed. 
A brief neurological assessment, consisting of simple clinical tests for fine finger dexterity (ability to pick up a small object), was undertaken. We also obtained blood by fingerprick for thick and thin blood smears, for hemoglobin assessment, and to store on filter paper for molecular analysis. Patients were asked to return for follow-up on days 1, 2, 3, 7, 14, 21, 28, 35, and 42, and any other day that they felt ill. Follow-up evaluation consisted of a standardized history and physical examination, including neurological assessment on all days of follow-up. We obtained blood by fingerprick for thick blood smears and storage on filter paper on all follow-up days except day 1. Hemoglobin measurement was repeated on day 42 or the day of recurrent symptomatic malaria. If patients did not return for follow-up, they were visited at home. Blood smears were stained with 2% Giemsa for 30 min. Parasite densities were determined from thick blood smears by counting the number of asexual parasites per 200 white blood cells (WBCs), or per 500 if the count was less than 10 parasites/200 WBCs, assuming a WBC count of 8,000/μl. A smear was considered negative if no parasites were seen after review of 100 high-power fields. We also assessed gametocytemia from thick blood smears. Thin blood smears were reviewed for non-falciparum infections. A second microscopist, who was unaware of the results of the first reading, re-read all slides. A third microscopist unaware of the first two readings resolved discrepant slides. Hemoglobin measurements were made using a portable spectrophotometer (HemoCue, http://www.hemocue.com).\n\nBODY.METHODS.INTERVENTIONS:\nOn day 0, patients were randomly assigned to receive AL or DP. A nurse administered study medications according to weight-based guidelines for administration of fractions of tablets. We administered all drugs orally as follows: AL (Coartem, Novartis, 20 mg artemether/120 mg lumefantrine tablets), administered according to weight as one (5–14 kg), two (15–24 kg), three (25–34 kg), or four (≥ 35 kg) tablets given twice daily for 3 d; DP (Duocotexin, Holley Pharm, 40 mg dihydroartemisinin/320 mg piperaquine tablets), targeting a total dose of 6.4 and 51.2 mg/kg of dihydroartemisinin and piperaquine, respectively, given in three equally divided daily doses to the nearest quarter tablet. We used a pill cutter to ensure that the tablet fractions were as close to the nearest quarter tablet as possible. Participants in the DP group also received placebo tablets administered in the evening over 3 d to simulate the AL dosing schedule. Study medications were administered with water, and patients were given a glass of milk after each dose of study medication. All treatment was directly observed. Participants were given the option either to wait at the clinic for the evening dose (lunch was provided) or to leave the clinic and return in the evening (transport was provided). After each dose, children were observed for 30 min, and the dose was readministered if vomiting occurred. All patients were provided with a 3 d supply of acetaminophen for treatment of febrile symptoms. Children with hemoglobin of less than 10 g/dl were treated according to Integrated Management of Childhood Illness guidelines with ferrous sulfate for 14 d and antihelminthic treatment if appropriate. 
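As a worked illustration of the thick-smear arithmetic described under Procedures above (asexual parasites counted against 200 WBCs, or 500 WBCs when fewer than 10 parasites are seen per 200 WBCs, with an assumed WBC count of 8,000/μl), a short Python sketch follows; the counts are invented, not study data:

```python
# Illustrative sketch of the thick-smear parasite density calculation described
# above; assumes a fixed WBC count of 8,000 per microlitre. Example counts are invented.

ASSUMED_WBC_PER_UL = 8000

def parasite_density_per_ul(parasites_counted, wbc_counted):
    """Asexual parasites per microlitre of blood from a thick-smear count."""
    return parasites_counted * ASSUMED_WBC_PER_UL / wbc_counted

# Typical reading: 1,250 parasites against 200 WBCs -> 50,000 parasites/uL
print(parasite_density_per_ul(1250, 200))   # 50000.0
# Low-density reading extended to 500 WBCs: 8 parasites -> 128 parasites/uL
print(parasite_density_per_ul(8, 500))      # 128.0
```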
Households of all patients were given two long-lasting insecticide-treated bed nets (ITNs) (PermaNet, Vestergaard Frandsen, http://www.vestergaard-frandsen.com) on the day of enrollment, with instructions for one net to be used by the study patient.\n\nBODY.METHODS.OBJECTIVES:\nThe objectives of the study were to compare the efficacy and safety of AL and DP for the treatment of uncomplicated falciparum malaria at a high transmission–intensity site in Uganda.\n\nBODY.METHODS.OUTCOMES: EFFICACY:\nTreatment outcomes were classified according to 2006 World Health Organization (WHO) guidelines as early treatment failure (ETF; danger signs or complicated malaria or failure to adequately respond to therapy days 0–3); late clinical failure (LCF; danger signs or complicated malaria or fever and parasitemia on days 4–42 without previously meeting criteria for ETF or LPF); late parasitological failure (LPF; asymptomatic parasitemia days 7–42 without previously meeting criteria for ETF or LCF); or adequate clinical and parasitological response (absence of parasitemia on day 42 without previously meeting criteria for ETF, LCF, or LPF) [16]. Patients were treated with quinine sulfate (10 mg/kg three times daily for 7 d) on the day that they fulfilled criteria for early treatment failure or late clinical failure. Patients with late parasitological failure were followed, and were given quinine only if they developed fever with parasitemia or remained parasitemic on the last day of follow-up. Patients were excluded from further follow-up after enrollment if any of the following occurred: (1) use of antimalarial drugs outside of the study protocol; (2) withdrawal of consent; (3) loss to follow-up; (4) protocol violation; or (5) death due to a nonmalarial illness. The primary efficacy outcomes were the 28- and 42-d risks of early treatment failure or recurrent parasitemia (LCF or LPF), unadjusted and adjusted by genotyping. Secondary efficacy outcomes included prevalence of fever and parasitemia during the first 3 d of follow-up, change in mean hemoglobin from day 0 to day 42 or day of repeat therapy, and prevalence of gametocytemia (presence of gametocytes on thick smears) during follow-up in participants lacking gametocytes at enrollment. Molecular genotyping techniques were used to distinguish recrudescent from new infections for all patients with LCF or LPF response. Briefly, parasite DNA was isolated from filter paper blood samples collected at enrollment and on the day of recurrent parasitemia using chelex extraction. Paired samples were genotyped in a stepwise fashion using merozoite surface protein (msp)-2, msp-1, and four microsatellites [17]. If, for any of the six loci, an allele was not shared between day 0 and day of recurrence, the infection was classified as a new infection. If at least one allele was shared between day 0 and day of recurrence at all six loci, the infection was classified as a possible recrudescence. The term \"possible recrudescence\" was used because the complexity of infection (number of infecting parasite strains) was very high in our setting, making it difficult to distinguish definitively a true recrudescence from a new infection.\n\nBODY.METHODS.OUTCOMES: SAFETY:\nSecondary safety outcomes included risks of serious adverse events and common adverse events of any severity. An adverse event was defined as any untoward medical occurrence, irrespective of its suspected relationship to the study medications [18]. 
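The six-locus genotyping rule described under Outcomes: Efficacy above can be written as a short decision routine: a recurrence counts as a new infection if any locus shares no allele between day 0 and the day of recurrence, and as a possible recrudescence only if at least one allele is shared at all six loci. The Python sketch below uses hypothetical allele identifiers and is not the genotyping software actually used in the trial:

```python
# Minimal sketch of the classification rule described above: compare day-0 and
# day-of-recurrence alleles at six loci (msp-2, msp-1 and four microsatellites).
# Allele identifiers below are hypothetical.

LOCI = ["msp2", "msp1", "ms1", "ms2", "ms3", "ms4"]

def classify_recurrence(day0, recurrence):
    """Return 'new infection' or 'possible recrudescence' from per-locus allele sets."""
    for locus in LOCI:
        if not (day0[locus] & recurrence[locus]):   # no shared allele at this locus
            return "new infection"
    return "possible recrudescence"                 # >=1 shared allele at all six loci

day0 = {"msp2": {"A", "B"}, "msp1": {"K1"}, "ms1": {120}, "ms2": {88, 92},
        "ms3": {150}, "ms4": {201, 205}}
recur = {"msp2": {"B"}, "msp1": {"K1", "MAD20"}, "ms1": {120}, "ms2": {92},
         "ms3": {150}, "ms4": {205}}
print(classify_recurrence(day0, recur))  # possible recrudescence
```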
At each follow-up visit, patients were assessed for any new or worsening event. All events were graded by severity (none, mild, moderate, severe, life-threatening) and relationship to study treatment (none, unlikely, possible, probable, or definite) using guidelines from the World Health Organization (Toxicity Grading Scale for Determining the Severity of Adverse Events) [19]and the United States National Institutes of Health, Division of Microbiology and Infectious Diseases (Pediatric Toxicity Tables, May 2001) [20]. A serious adverse event was defined as any adverse experience that resulted in death, life-threatening experience, inpatient hospitalization, persistent or significant disability or incapacity, or specific medical or surgical intervention to prevent serious outcome.\n\nBODY.METHODS.SAMPLE SIZE:\nWe calculated sample size to test the hypothesis that the risk of recurrent parasitemia after 42 d would differ between the two treatment groups. Based on previous data, the risk of recurrent parasitemia (unadjusted by genotyping) after 42 d was estimated to be 50% after treatment with AL [3]. Using this estimate, we calculated that 200 patients (allowing for 10% loss to follow-up) would need to be enrolled in each treatment arm to detect a 15% risk difference between the treatment groups with a two-sided type I error of 0.05 and power of 80%.\n\nBODY.METHODS.RANDOMIZATION: SEQUENCE GENERATION, ALLOCATION CONCEALMENT, IMPLEMENTATION:\nA randomization list was computer generated by an off-site investigator without the use of blocking or stratification. Sequentially numbered, sealed envelopes containing the treatment group assignments were prepared from the randomization list. The study doctors assigned treatment numbers sequentially and the study nurse allocated treatment by opening the envelope corresponding to the treatment number. The randomization codes were secured in a locked cabinet accessible only by the study nurse. Participants were enrolled by the study physicians, and treatments were assigned and administered by the study nurse.\n\nBODY.METHODS.BLINDING:\nOnly the study nurse was aware of treatment assignments. All other study personnel, including the study physicians and laboratory personnel involved in assessing outcomes, were blinded to the treatment assignments. Patients were not informed of their treatment regimen, but the color of the two study medications was not the same (DP and placebo tablets were light blue; AL tablets were light yellow).\n\nBODY.METHODS.STATISTICAL METHODS:\nData were entered and verified using Epi Info version 6.04 and analyzed using STATA version 8.0 (STATA, http://www.stata.com). Efficacy and safety data were evaluated using a modified intention-to-treat analysis which included all patients who fulfilled enrollment criteria. Patients who were randomized to therapy but not enrolled in the study due to laboratory results available on day 1 were not included in the analysis. Risks of recurrent parasitemia at 28 and 42 d of follow-up (adjusted and unadjusted by genotyping) were estimated using the Kaplan-Meier product limit formula. Data were censored for patients who did not complete follow-up and for new infections when estimating outcomes adjusted by genotyping. Patients with LCF or LPF due to non-falciparum species were censored as non-failures at the time they were classified as LCF or LPF. The Z-test was used to compare the Kaplan-Meier estimates of treatment efficacy at fixed points in time between the treatment groups. 
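The sample-size statement above can be cross-checked with the usual normal-approximation formula for comparing two proportions (50% vs 35% risk of recurrent parasitemia, two-sided α = 0.05, 80% power). The Python sketch below is a rough check under those assumptions, not the investigators' original calculation; it lands near, though not exactly at, the quoted 200 patients per arm:

```python
# Rough sample-size check for two proportions under the assumptions stated above
# (two-sided alpha = 0.05, power = 80%, 50% vs 35% recurrent parasitemia).
from math import sqrt, ceil
from statistics import NormalDist

def n_per_arm(p1, p2, alpha=0.05, power=0.80):
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    p_bar = (p1 + p2) / 2
    num = z_a * sqrt(2 * p_bar * (1 - p_bar)) + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))
    return ceil((num / (p1 - p2)) ** 2)

n = n_per_arm(0.50, 0.35)
print(n, ceil(n / 0.9))  # roughly 170 evaluable per arm, ~189 after allowing 10% loss
```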
Confidence intervals around the difference between Kaplan-Meier estimates were calculated using normal approximation and Greenwood's estimates of standard errors. Categorical variables were compared using Chi-squared or Fisher exact test and continuous variables were compared using the independent samples t-test. All reported p-values are two sided without adjustment for multiple testing and were considered statistically significant if below 0.05.\n\nBODY.RESULTS.PARTICIPANT FLOW:\nOf 572 patients screened, 509 were randomized to treatment, and 421 were enrolled in the study (Figure 1). Primary efficacy outcomes, unadjusted and adjusted by genotyping, were available for 417 (99%) and 416 (99%) enrolled participants, respectively. One patient with an unsuccessful genotyping result in the AL group was not included in the analysis when adjusting for genotyping. Figure 1Trial Profile\n\nBODY.RESULTS.RECRUITMENT:\nThe study was conducted between March and July 2006.\n\nBODY.RESULTS.BASELINE DATA:\nAmong patients enrolled in the study, there was no difference at baseline of gender, age, temperature, parasite density, hemoglobin, or recent antimalarial use between the two treatment groups (Table 1). Among patients treated with AL, the mean total doses (standard deviation [SD]) were 13.1 (2.8) mg/kg for artemether and 78.8 (17.1) mg/kg for lumefantrine. Among patients treated with DP, the mean total doses (SD) were 7.3 (1.0) mg/kg for dihydroartemisinin and 58.4 (8.0) mg/kg for piperaquine. Table 1 Baseline Characteristics of Participants According to Treatment Group \n\nBODY.RESULTS.OUTCOMES AND ESTIMATION.PRIMARY EFFICACY OUTCOMES.:\nThere were no early treatment failures in the first 3 d following initiation of therapy. Episodes of recurrent parasitemia were first detected 14 d following therapy in the AL arm and 21 d following therapy in the DP arm (Figure 2; Table 2). The risk of recurrent falciparum parasitemia unadjusted by genotyping was significantly lower for participants treated with DP than for those treated with AL after 28 d of follow-up (11% versus 29%; risk difference [RD] = 18%, 95% confidence interval [CI] 11%–26%) and after 42 d of follow up (43% versus 53%; RD = 9.6%, 95% CI 0%–19%) (Table 3). Similar trends were seen when results were adjusted by genotyping. The risk of recurrent parasitemia due to possible recrudescence was significantly lower for participants treated with DP than for those treated with AL after 28 d of follow-up (1.9% versus 8.9%; RD = 7.0%, 95% CI 2.5%–12%) and after 42 d of follow up (6.9% versus 16%; RD = 9.5%, 95% CI 2.8%–16%) (Table 3). Among patients with recurrent parasitemia, the median time to recurrent parasitemia was significantly shorter in patients treated with AL compared to patients treated with DP (28 d versus 35 d, p < 0.0001). Additionally, patients treated with AL had a higher risk of recurrent parasitemia due to non-falciparum species (all were either P. malariae or P. ovale) compared to patients treated with DP (5.2% versus 0.9%, p = 0.01). Results were similar when restricting the analyses to children under the age of 5 y, as 94% of patients enrolled were in this age range. Figure 2Cumulative Risk of Recurrent Parasitemia Stratified by Treatment Group Table 2 WHO Treatment Outcomes after 42 Days of Follow-Up Table 3 Estimates of Comparative Efficacy Patients with asymptomatic recurrent parasitemia (LPF) were not treated unless they developed symptomatic malaria or reached the end of the 42-d follow-up period. 
Among 121 patients with LPF occurring before day 42, only six (5%) spontaneously cleared their parasites without treatment, 86 (53/75, 71% in AL group and 33/46, 72% in DP group) went on to develop symptomatic malaria by day 42, 28 (23%) had persistent asymptomatic parasitemia at day 42, and one (1%) took other antimalarials prior to day 42. Overall, 81 (39%) of 210 patients treated with AL went on to develop recurrent symptomatic malaria, compared to 52 (25%) of 211 patients treated with DP (p = 0.002).\n\nBODY.RESULTS.OUTCOMES AND ESTIMATION.SECONDARY EFFICACY OUTCOMES.:\nThe prevalence of fever (either subjective or documented) was similar over the first 3 d of follow-up in the two treatment groups. Both treatments produced rapid clearance of parasitemia with no parasites detected by day 3 (Table 4). The appearance of gametocytes not present at enrollment was significantly lower over the last 4 wk of follow-up in DP group (Table 4). Patients treated with DP had a higher mean increase in hemoglobin levels, which was of borderline statistical significance (1.9 versus 1.5 g/dl, p = 0.05). However, among patients with recurrent parasitemia there was no difference in the prevalence of anemia (Hb <10 g/dl) on the day of failure in the AL group (33/117, 28%) compared to the DP group (25/92, 27%) (p = 0.87). Table 4 Secondary Outcomes \n\nBODY.RESULTS.ADVERSE EVENTS:\nBoth drugs were well tolerated. Most adverse events were of mild or moderate severity and consistent with symptoms due to malaria. Overall, there was no difference in the proportion of study participants who experienced any adverse event of moderate or greater severity between the DP (46%) and AL (42%) treatment groups (p = 0.47). There was also no difference in the proportion of patients who experienced common adverse events of any severity (Table 4). Serious adverse events occurred in six participants and included three febrile convulsions, one case of acute otitis media, one acute asthma attack, and one case of pyomyositis. All serious adverse events were judged to be unrelated to study medications. No patients were withdrawn from the trial for drug-induced vomiting that would have required alternative treatment.\n\nBODY.DISCUSSION.INTERPRETATION:\nIn this randomized clinical trial, AL and DP were both highly efficacious at initial clearance of parasitemia and well tolerated for treatment of uncomplicated malaria in Apac, Uganda, an area with extremely high malaria transmission intensity. Importantly, DP-treated patients had a significantly lower risk of recurrent parasitemia in both falciparum and non-falciparum infections. Accurate distinction between recrudescent and new falciparum infections following therapy was challenging due to the complexity of infection in this high transmission setting, despite the use of 6-locus genotyping. However, DP clearly offered better post-treatment prophylactic effect following therapy compared to AL and our data suggest a reduced risk of treatment failure due to recrudescent parasites. DP also offered other benefits, including a lower risk of gametocytemia after therapy and better hemoglobin recovery. The significantly lower risk of recurrent parasitemia after treatment with DP compared to AL is likely explained by differences in the terminal elimination half-lives of the two partner drugs. Piperaquine, a bisquinoline, is estimated to have an elimination half-life of 2–3 wk [21] compared to lumefantrine, a quinoline, which has an estimated elimination half-life of 4–10 d [22]. 
Selecting the ideal partner drug to combine with artemisinins in ACT regimens remains a challenge. Extended elimination half-lives may provide better post-treatment prophylaxis, but may also increase the risk for the selection of drug-resistant parasites, especially in areas of intense malaria transmission [23]. Resistance may develop to partner drugs during the elimination phase of the drug (due to their longer half-life), when newly infecting parasites are exposed to subtherapeutic levels of the drug. As ACT use becomes widespread in areas with high levels of malaria transmission, it will be important to monitor closely for the selection of parasites that are resistant to artemisinin partner drugs, since the benefits of a regimen that offers decreased recurrent infection must be balanced with the consequences of increased selection of resistant parasites.\n\nBODY.DISCUSSION.GENERALIZABILITY:\nOur study was conducted in an area of very high malaria transmission and highlights the importance of level of transmission in determining the overall efficacy of an antimalarial treatment regimen [24]. In a high-transmission setting, differences in the post-treatment prophylactic effect of ACTs may have a significant impact on the timing and frequency of recurrent episodes of malaria, as seen in this study. In areas with lower transmission intensity, a drug's post-treatment prophylactic effect would be expected to be of lesser importance. This is illustrated in a study from three sites in Rwanda with differing transmission intensity [14]. At a periurban site with relatively low transmission intensity, the risk of recurrent parasitemia after 28 d was equally low in patients treated with DP (4%) and patients treated with AQ + AS (7%). In contrast, at two rural sites with high transmission intensity, the risk of recurrent parasitemia was significantly lower with DP (12%) compared to AQ + AS (23%). One limitation of this study was the difficulty of accurately distinguishing recrudescence from new infections among patients with recurrent parasitemia, due to the high complexity of infection. Among episodes classified as \"possible recrudescence,\" the mean complexity of infection was over five clones on day 0 and approximately four clones on the day of recurrent parasitemia. Even using six-locus genotyping, the probability of a new infection being misclassified as a recrudescence may be relatively high. Thus, the reported 42-d risks of recurrent parasitemia adjusted by genotyping of 16% in the AL treatment arm and 6.9% in the DP treatment arm likely overestimate the true risks of recrudescence. In this study we followed the new WHO outcome classification system where a patient is classified as a failure after the first reappearance of parasitemia, regardless of whether the patient is symptomatic [16]. Previously, patients with asymptomatic parasitemia following therapy were distinguished from those with recurrent symptomatic malaria based on the assumption that asymptomatic parasitemia may not be clinically important in areas of high transmission intensity [25]. 
However, in this study, where semi-immunity is expected to be high, the vast majority of patients with asymptomatic parasitemia following therapy went on to develop symptomatic malaria, strongly supporting the new WHO protocol recommendations and suggesting the weakness of natural immunity in clearing these parasites.\n\nBODY.DISCUSSION.OVERALL EVIDENCE:\nIn this study, DP was shown to offer benefits over AL, including lower risks of recurrent parasitemia and gametocytemia following therapy and improved hemoglobin recovery. Both DP and AL are fixed-dose coformulated ACTs. However, DP has a simpler, once-daily dosing schedule compared to AL, which is provided twice daily, ideally with a fatty meal. Our results could have important policy implications. Currently AL is the recommended first-line therapy for treatment of uncomplicated malaria in Uganda, with AQ + AS recommended as an alternative if AL is not available. DP is now registered for use in Uganda, and appears to offer an additional highly efficacious ACT for our limited antimalarial armamentarium. In studies from Southeast Asia, DP appears to be well tolerated and highly efficacious against multidrug-resistant falciparum malaria [7–11]. In the only published study evaluating DP in Africa, DP had a cure rate of over 95% and had a significantly lower risk of recurrent parasitemia and adverse events compared to AQ + AS and AQ + SP in Rwanda [14]. Based on available data, DP warrants serious consideration as a first-line therapy for uncomplicated malaria in Africa. DP may also have a role for presumptive treatment of fever through the program for home-based management of fever (HBMF). Currently prepackaged CQ + SP (Homapak) is being distributed in the HBMF program. However, the CQ + SP combination is no longer efficacious [24], and there are plans to incorporate ACTs in the future. Use of DP for HBMF might be attractive because of its simple dosing schedule. There are also potential advantages of having more than one first-line therapy, one for facility-based management of malaria and another for HBMF, in reducing the selective pressure of using one ACT. Despite the excellent initial parasite clearance of both ACT regimens in this study and the provision of ITNs at enrollment, approximately half of all participants experienced recurrent parasitemia within 42 d. This finding emphasizes the need for more aggressive approaches to malaria control in areas with very high malaria transmission. A study done in an area of South Africa with lower-intensity transmission found that the combination of vector control measures including indoor residual spraying and provision of AL dramatically decreased the malaria burden [26]. In order to reduce new malaria infections in our study population, we anticipate that combining several malaria control measures, including treatment with ACTs, provision of ITNs (with education about their use), and potential use of indoor residual spraying, as in South Africa, will likely decrease the malaria burden and reduce drug pressure due to repeated use of ACTs. Monitoring of the impact of these combined control measures will be critical to assess our success in malaria control in Uganda.\n\nBODY.SUPPORTING INFORMATION:\nCONSORT ChecklistAL versus DP Trial, Apac Uganda(49 KB DOC)Click here for additional data file. 
Trial ProtocolComparison of AL and DP for Treatment of Uncomplicated Malaria in Uganda: Evaluation of Efficacy, Safety, and Tolerability at Three Sites with Varying Transmission Intensity(1.1 MB DOC)Click here for additional data file.\n\n**Question:** Compared to artemether-lumefantrine what was the result of dihydroartemisinin-piperaquine on unadjusted risk of recurrent falciparum parasitemia?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly decreased\n"
}
] |
significantly decreased
|
328
|
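A minimal sketch of the normal-approximation risk-difference interval described in the statistical-methods text of the row above. The trial itself placed intervals around Kaplan-Meier estimates using Greenwood standard errors, which is not reproduced here; the version below assumes a plain binomial standard error and takes the 210 and 211 treated patients as denominators. Those assumptions are enough to land close to the published 28-day result (RD = 18%, 95% CI 11%–26%); the function name is made up for illustration.

```python
from math import sqrt


def risk_difference_ci(p1, n1, p2, n2, z=1.96):
    """Normal-approximation (Wald) confidence interval for a difference between two risks."""
    rd = p1 - p2
    se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return rd, rd - z * se, rd + z * se


# 28-day unadjusted risk of recurrent falciparum parasitemia: 29% with AL vs 11% with DP.
# Denominators are assumed from the 210 and 211 treated patients reported in the row above.
rd, lo, hi = risk_difference_ci(0.29, 210, 0.11, 211)
print(f"RD = {rd:.0%}, 95% CI {lo:.1%} to {hi:.1%}")  # close to the published 11%-26% interval
```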
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Efficacy and Tolerability of Budesonide/Formoterol in One Hydrofluoroalkane Pressurized Metered-Dose Inhaler in Patients with Chronic Obstructive Pulmonary Disease\n\n ABSTRACT:\nBackground: Combination therapy with a long-acting bronchodilator and an inhaled corticosteroid (ICS) is recommended in patients with chronic obstructive pulmonary disease (COPD) who have frequent exacerbations. The efficacy and tolerability of the combination of budesonide/formoterol have been demonstrated in patients with COPD when administered via the dry powder inhaler (DPI) in a 1-year study and when administered via the hydrofluoroalkane (HFA) pressurized metered-dose inhaler (pMDI) in a 6-month study. Objective: This study assessed the long-term efficacy and tolerability of budesonide/formoterol HFA pMDI in patients with moderate to very severe COPD. Methods: This was a 12-month, randomized, double-blind, double-dummy, parallel-group, active- and placebo-controlled, multicentre study (NCT00206167) of 1964 patients aged ≥40 years with moderate to very severe COPD conducted from 2005 to 2007 at 237 sites in the US, Europe and Mexico. After 2 weeks of treatment based on previous therapy (ICSs, short-acting bronchodilators allowed), patients received one of the following treatments twice daily: budesonide/formoterol pMDI 160/4.5 μg × two inhalations (320/9 μg); budesonide/formoterol pMDI 80/4.5 μg × two inhalations (160/9 μg); formoterol DPI 4.5 μg × two inhalations (9 μg); or placebo. Main outcome measures: The co-primary efficacy variables were pre-dose forced expiratory volume in 1 second (FEV1) and 1-hour post-dose FEV1. Results: Budesonide/formoterol 320/9 μg demonstrated greater improvements in pre-dose FEV1 versus formoterol (p = 0.008), and both budesonide/formoterol doses demonstrated greater improvements in 1-hour post-dose FEV1 versus placebo (p < 0.001). The rate of COPD exacerbations was lower in both budesonide/formoterol groups compared with formoterol and placebo (p ≤ 0.004). Both budesonide/formoterol doses were more effective than placebo (p ≤ 0.006) for controlling dyspnoea and improving health status (St George's Respiratory Questionnaire). All treatments were generally well tolerated. The incidence of pneumonia was not different for active (3.4–4.0%) and placebo (5.0%) groups. Conclusions: Budesonide/formoterol pMDI (320/9 μg and 160/9 μg) improved pulmonary function and reduced symptoms and exacerbations over 1 year in patients with moderate to very severe COPD. Only budesonide/formoterol pMDI 320/9 μg demonstrated greater efficacy for both co-primary variables compared with formoterol DPI 9 μg. 
Both budesonide/formoterol pMDI dosages were well tolerated relative to formoterol and placebo.\n\nABSTRACT.ELECTRONIC SUPPLEMENTARY MATERIAL:\nSupplementary material is available for this article at 10.2165/00003495-200969050-00004 and is accessible for authorized users.\n\nBODY.BACKGROUND:\nIn patients with chronic obstructive pulmonary disease (COPD) and frequent exacerbations, the combination of a long-acting bronchodilator and an inhaled corticosteroid (ICS) is recommended.[1] Budesonide and formoterol delivered via a single dry powder inhaler (DPI; Symbicort® Turbuhaler®; AstraZeneca, Lund, Sweden) have been shown to improve pulmonary function[2,3] and reduce COPD exacerbation rates.[2,3] Fixed-combination budesonide/formoterol is available in the US in a hydrofluoroalkane pressurized metered-dose inhaler (pMDI; Symbicort® Inhalation Aerosol; AstraZeneca LP, Wilmington, DE, USA). While most patients with COPD can benefit from either a pMDIor DPI, the pMDI formulation allows patients whomay have inadequate peak inspiratory flow for optimal delivery from a DPI to use the pMDI effectively. A recent 6-month study demonstrated the efficacy and tolerability of budesonide/formoterol pMDI in patients with moderate to very severe COPD.[4] Treatment with budesonide/formoterol pMDI resulted in significant improvements in pulmonary function, dyspnoea and health-related quality of life; however, the duration of the study was limiting with respect to the analysis of low-frequency events, such as COPD exacerbations, and precluded an assessment of safety outcomes with long-term exposure.[4] In this study, the long-term efficacy and tolerability of two dosage strengths of budesonide/formoterol pMDI were assessed over 12 months in patients with moderate to very severe COPD.\n\nBODY.PATIENTS AND METHODS.PATIENTS:\nThe inclusion criteria were designed to select a population with moderate to very severe COPD with previous exacerbations (i.e. appropriate candidates for combination ICS/long-acting β2-adrenoceptor agonist [LABA] therapy): age ≥40 years, diagnosis of symptomatic COPD for >2 years, ≥10 pack-year smoking history, prebronchodilator forced expiratory volume in 1 second (FEV1) of ≤50% of predicted normal and prebronchodilator FEV1/forced vital capacity (FVC) of <70%. Patients were to have a Modified Medical Research Council dyspnoea scale score of ≥2 and a history of at least one COPD exacerbation requiring oral corticosteroids or antibacterials within 1–12 months before the first study visit. Additional enrolment criteria were the same as those in a similar 6-month study by Tashkin et al.[4]\n\nBODY.PATIENTS AND METHODS.STUDY DESIGN:\nThis was a randomized, double-blind, double-dummy, parallel-group, active- and placebo-controlled, multicentre trial (ClinicalTrials.gov identifier: NCT00206167) conducted from 2005 to 2007 at 237 sites in the US, Europe and Mexico. The study protocol was approved by a local institutional review board and ethics committee, and written informed consent was obtained from patients. The study was designed to conform with the Declaration of Helsinki, and was consistent with the International Conference on Harmonisation and Good Clinical Practice and applicable regulatory requirements. After meeting eligibility criteria, patients entered a 2-week run-in period, during which they received ICS monotherapy if previously stable on ICS (alone or in combination) and ipratropium bromide at a fixed dose if previously receiving anticholinergics. 
Albuterol (salbutamol) was permitted for rescue use throughout the study. At randomization, all previous ICSs were discontinued and patients were randomized to one of four treatments (figure 1). Patients who were treated with ipratropium bromide during the run-in period were allowed to continue on it throughout the study period. Other allowed and disallowed concomitant medications were the same as those reported in the study by Tashkin et al.[4]\nFig. 1Study design. To maintain blinding, patients received both a pressurized metered-dose inhaler (pMDI) and a dry powder inhaler (DPI) containing either active treatment or double-dummy placebo (PL) as appropriate. This study used formoterol (FM) DPI (Oxis® Turbuhaler®, AstraZeneca, Lund, Sweden) as the FM comparator because FM is not available as a hydrofluoroalkane pMDI in the US. A previous study in asthma patients reported equivalent FM-related bronchodilatory effects when FM was administered in combination with budesonide (BUD) via pMDI or alone via DPI.[5] Patients were asked to return to the clinic for follow-up visits 3–8 at the end of months 1, 2, 4, 6, 9 and 12, and received a telephone call 4 weeks after the last clinic visit. bid = twice daily; R = randomization.\n\nBODY.PATIENTS AND METHODS.EFFICACY EVALUATIONS:\nThe co-primary efficacy variables, measured at all clinic visits, were pre-dose FEV1 and 1-hour post-dose FEV1. Additional pulmonary function variables included pre-dose and 1-hour post-dose FVC measured at all clinic visits, and morning and evening peak expiratory flow (PEF) recorded daily in patient diaries. In a subset of patients (n = 491 of 1964; 25%) who agreed to undergo spirometry testing, 12-hour serial spirometry was performed and pre-dose and 1-hour post-dose inspiratory capacity (IC) were collected at randomization and months 6 and 12. FEV1 was measured pre-dose and at 5, 15, 30, 60, 120, 180, 240, 360, 480, 600 and 720 minutes post-dose. Baseline-adjusted average 12-hour FEV1 was calculated as the area between the 12-hour post-dose FEV1-over-time curve and the baseline pre-dose FEV1, divided by observation time. Spirometry was performed according to American Thoracic Society guidelines.[6] Crapo-predicted normals for FEV1 were used.[7] Secondary efficacy endpoints included COPD exacerbations, dyspnoea (separate and composite scores) and health status, all assessed as previously described.[4] A COPD exacerbation was defined as worsening of COPD requiring an oral corticosteroid or hospitalization.[4] Dyspnoea was assessed daily before the evening dose of study medication using the Breathlessness Diary, a validated single-item component of the Breathlessness Cough and Sputum Scale (BCSS).[8] Health status was assessed at months 1, 2, 6 and 12 using the St George's Respiratory Questionnaire (SGRQ).[9,10] Sleep score, percentage of awakening-free nights (sleep score of 0) and study rescue medication use were assessed as previously described by Tashkin et al.[4]\n\nBODY.PATIENTS AND METHODS.SAFETY EVALUATIONS:\nSafety was assessed by adverse event (AE) reporting. Pneumonia events were reported by physicians based on the Medical Dictionary for Regulatory Activities (version 10.0) pneumonia-related preferred terms (pneumonia, bronchopneumonia, lobar pneumonia or pneumonia staphylococcal). 
Vital signs and 12-lead ECGs were evaluated as previously described by Tashkin et al.[4] Subsets of patients were assessed for 24-hour urinary cortisol (n = 179), 24-hour Holter monitoring (n = 520), bone mineral density (BMD) at the hip and spine regions (n = 326), and ophthalmological assessments (n = 461), including intraocular pressure and progression of lenticular opacities. In all patients, blood and urine samples were collected, and comprehensive physical examinations were carried out at the time of screening and at the end of month 12. Blood and urine samples were analysed by a central laboratory (Quest Diagnostics Clinical Laboratories, Van Nuys, CA, USA). Vital signs, including heart rate and blood pressure, were measured at all study visits. Samples to assess 24-hour urinary cortisol levels were collected in a subset of patients at or before randomization and within 1 week before the 6- and 12-month visits, and analysed by Quest Diagnostics. 24-Hour Holter monitoring was performed at screening and months 1 and 4, BMD assessments (two sets of dual energy x-ray absorptiometry scans in the hip and spine regions) were performed at screening and the end of month 12, and ophthalmology assessments (intraocular pressure and lenticular opacities) were performed at screening and the end of months 6 and 12.\n\nBODY.PATIENTS AND METHODS.STATISTICAL ANALYSES:\nThe efficacy analysis set (i.e. intent-to-treat population) included all randomized patients who received at least one dose of randomized study medication and contributed sufficient data for at least one co-primary or secondary efficacy endpoint to be calculated during the randomized treatment period. The safety analysis population included all randomized patients who received at least one dose of randomized study medication and from whom any post-randomization data were available. For the subsets of patients who underwent serial spirometry, 24-hour urinary cortisol, BMD, 24-hour Holter monitoring and ophthalmological analyses, analysis sets were defined as those patients who received at least one dose of randomized study medication and had baseline and on-treatment values for the variable being assessed. Patients who discontinued prematurely completed final visit (month 12) assessments at the time of discontinuation, followed by the 4-week follow-up. A sample size of approximately 400 patients per treatment group was calculated to allow 90% power to detect a reduction in the number of COPD exacerbations of approximately 30%, adjusting for overdispersion of 2.3. This sample size ensured >95% power to detect a difference of 0.10L in FEV1, given an estimated standard deviation of 0.3 L. All tests were two-sided, with p ≤ 0.05 considered significant. The prespecified primary comparators were formoterol DPI for pre-dose FEV1 to demonstrate the contribution of budesonide and placebo for 1-hour post-dose FEV1. To address issues of multiplicity relating to multiple-dose comparisons and multiple variables of interest, a fixed-sequence testing procedure was adopted. The primary comparisons were prespecified as budesonide/formoterol pMDI 320/9 μg compared with (i) placebo for pre-dose FEV1 and 1-hour post-dose FEV1; and (ii) formoterol DPI 9 μg for pre-dose FEV1. If significant differences were obtained, comparisons continued with budesonide/formoterol pMDI 320/9 μg compared with placebo for (i) dyspnoea; (ii) SGRQ total score; and (iii) number of exacerbations. 
If significant differences were obtained for each of these key secondary variables, the lower dose of budesonide/formoterol pMDI was tested on the co-primary variables, as previously described, and, if significant differences were obtained, testing continued with number of exacerbations, dyspnoea and SGRQ total score compared with placebo. For all secondary efficacy variables, the primary comparison was budesonide/formoterol pMDI versus placebo. Primary and secondary variables were assessed as the change from baseline to the average over the randomized treatment period except for average 12-hour FEV1, mean FEV1 at 12 hours and SGRQ at end of treatment. Changes from baseline in the co-primary efficacy variables were analysed via analysis of co-variance (ANCOVA), adjusting for treatment, country and baseline value. The following three key secondary efficacy variables were identified: (i) dyspnoea; (ii) SGRQ total score; and (iii) exacerbations. Secondary variables were analysed similar to the co-primary efficacy variables; however, exacerbation rate (number/patient-treatment year) was analysed using a Poisson regression model, adjusted for country and differential randomization time, and time to first COPD exacerbation was described via Kaplan-Meier plot and analysed using the log-rank test. The number and percentage of patients experiencing clinically meaningful changes (based on a prespecified minimal important difference) in dyspnoea were analysed using the Cochran-Mantel-Haenszel test, adjusting for country. Geometric mean 24-hour urinary cortisol levels at end of treatment were compared between treatment groups using a multiplicative ANCOVA model. Mean changes from baseline to the average during the randomized treatment period in 12-lead ECG, and Holter variables and ophthalmology variables were analysed using a model similar to that used for the co-primary variables. BMD variables (natural logarithm of the analysis timepoint minus the natural logarithm of the respective baseline value for hip and spine) were analysed using an ANCOVA model adjusting for country, treatment and natural logarithm of the baseline value. Other safety data were summarized using descriptive statistics. No formal hypothesis testing of the safety data was performed.\n\nBODY.RESULTS.PATIENTS:\nOf 1964 randomized patients, 1355 completed the study (figure 2). Discontinuation was greater with placebo versus budesonide/formoterol and formoterol driven by withdrawal of consent. Time to discontinuation was significantly (p ≤ 0.004) prolonged in both budesonide/formoterol groups versus placebo. AE was the most common reason for study discontinuation, with a similar incidence across treatments (12.1–13.5%). Approximately 60% of patients had documented use of an ICS, either alone or in combination before entering the study. The percentage of patients who discontinued from the study in the formoterol and placebo groups was slightly higher in patients previously treated with ICS (34.9% and 38.0%, respectively) compared with those not previously receiving ICS before the study (26.8% and 33.9%, respectively). In contrast, patients in the budesonide/formoterol 320/9-μg group previously receiving ICS had a lower discontinuation rate compared with those not previously receiving ICS (23.1% vs 34.1%, respectively). In the budesonide/formoterol 160/9-μg group, the percentage of patients who discontinued the study was similar for those who were previously receiving an ICS (29.5%) and those who were not (28.2%).\nFig. 
2Patient disposition. AE = adverse event; bid = twice daily; BUD = budesonide; DPI = dry powder inhaler; FM = formoterol; med = medication; PL = placebo; pMDI = pressurized metered-dose inhaler. Most demographic and disease characteristics at baseline were similar across treatments (table I). However, in the serial spirometry subset (n = 491), mean percentage reversibility was greater in the budesonide/formoterol 320/9 μg (19.9%), budesonide/formoterol 160/9 μg (20.7%) and placebo (19.5%) groups versus formoterol (16.9%). COPD severity[1] was moderate in 17.8%, severe in 60.4% and very severe in 21.5% of patients. Common co-morbid conditions included hypertension (41.6%), lipid profile abnormalities (22.0%), cardiac disease (17.7%), diabetes mellitus (11.0%), osteoporosis (10.5%), cataracts (5.2%), atrial fibrillation/arrhythmia (4.3%) and congestive cardiac failure (2.9%).\nTable IPatient demographic and baseline clinical characteristics of randomized patients\n\nBODY.RESULTS.EFFICACY EVALUATIONS.PULMONARY FUNCTION.CO-PRIMARY ASSESSMENTS:\nImprovements in pre-dose FEV1 were significantly greater for budesonide/formoterol 320/9 μg compared with formoterol (primary comparison; p = 0.008) and for both budesonide/formoterol dosages compared with placebo (p < 0.001) [figure 3a]. Although improvements in 1-hour post-dose FEV1 were significantly greater for both budesonide/formoterol dosages compared with placebo (primary comparison; p < 0.001), budesonide/formoterol 320/9 μg also resulted in significantly greater improvements compared with formoterol (p = 0.023) [figure 3b]. Improvements from baseline were apparent at the first assessment (pre-dose FEV1 at end of month 1; 1-hour post-dose FEV1 at day of randomization) and overall maintained over the 12-month treatment period for both budesonide/formoterol dosages (figure 3a and b).\nFig. 3Co-primary efficacy endpoints. Least squares mean change from baseline by study visit over the randomized treatment period in (a) pre-dose forced expiratory volume in 1 second (FEV1) and (b) 1-hour post-dose FEV1. BUD = budesonide; DPI = dry powder inhaler; FM = formoterol; pMDI = pressurized metered-dose inhaler. * p < 0.001 vs placebo; † p ≤ 0.023 vs FM.\n\nBODY.RESULTS.EFFICACY EVALUATIONS.PULMONARY FUNCTION.SECONDARY ASSESSMENTS:\nMorning and evening PEF improved significantly for all active treatments compared with placebo (p ≤ 0.012) and for both budesonide/formoterol dosages compared with formoterol (p ≤ 0.017) [table II]. Results for pre-dose and 1-hour post-dose FVC are presented in the online supplement (see figures S1a and b in the supplementary material ['ArticlePlus'] at http://links.adisonline.com/DGZ/A5).\nTable IIMean (SD) changes from baseline in additional pulmonary function assessments In the serial spirometry subset, a ≥15% improvement in FEV1 was observed at 5 minutes after dose administration (first assessment) with all active treatments on the day of randomization (budesonide/formoterol 320/9 μg, 17.7%; budesonide/formoterol 160/9 μg, 20.3%; formoterol 9 μg, 16.5%) and at end of treatment (budesonide/formoterol 320/9 μg, 21.8%; budesonide/formoterol 160/9 μg, 22.4%; formoterol 9 μg, 15.0%) [figures 4a, b and c]. There was no evidence of a diminished effect at end of treatment over the 12-hour period in the budesonide/formoterol groups. 
Mean FEV1 at 12 hours and baseline-adjusted average 12-hour FEV1 were significantly improved with both budesonide/formoterol dosages compared with placebo on the day of randomization and at end of treatment (p ≤ 0.002) [table II]. Although both budesonide/formoterol dosages demonstrated significant improvements in mean FEV1 at 12 hours and in baseline-adjusted average 12-hour FEV1 compared with formoterol on the day of randomization (p ≤ 0.029), only budesonide/formoterol 320/9 μg demonstrated this effect at the end of treatment (p ≤ 0.004) [table II].\nFig. 4Mean percentage change from baseline in forced expiratory volume in 1 second (FEV1) over 12 hours at randomization and end of treatment (EOT) for (a) budesonide (BUD)/formoterol (FM) 320/9 μg twice daily (bid) vs placebo, (b) BUD/FM 160/9 μg bid vs placebo and (c) FM 9 μg bid vs placebo. DOR = day of randomization; DPI = dry powder inhaler; pMDI = pressurized metered-dose inhaler. No significant differences in pre-dose IC were observed among the treatment groups (figure 5a). Improvements from baseline to the average over the randomized treatment period in 1-hour post-dose IC were significantly greater with both budesonide/formoterol dosages compared with placebo (p < 0.001) and formoterol (p ≤ 0.018) [figure 5b]. Both dosages of budesonide/formoterol resulted in a 350 mL (≈25%) improvement from baseline to the average over the randomized treatment period in 1-hour post-dose IC, and formoterol resulted in a 210 mL (17%) improvement in 1-hour post-dose IC.\nFig. 5Least squares mean change from baseline by study visit over the randomized treatment period in (a) pre-dose inspiratory capacity (IC) and (b) 1-hour post-dose IC. BUD = budesonide; DPI = dry powder inhaler; FM = formoterol; pMDI = pressurized metered-dose inhaler. * p < 0.001 vs placebo; † p = 0.01 vs FM; ‡ p < 0.05 vs FM; § p < 0.01 vs placebo.\n\nBODY.RESULTS.EFFICACY EVALUATIONS.CHRONIC OBSTRUCTIVE PULMONARY DISEASE (COPD) EXACERBATIONS:\nTime to first COPD exacerbation was significantly prolonged with both budesonide/formoterol dosages compared with placebo (p ≤ 0.004) and with budesonide/formoterol 320/9 μg compared with formoterol (p = 0.026) [figure 6]. In addition, significant reductions in the overall number of exacerbations per patient-treatment year were observed with budesonide/formoterol 320/9 μg and 160/9 μg versus placebo (37% and 41%, respectively; p < 0.001) and formoterol (25% and 29%, respectively; p ≤ 0.004) [see figure S2 in the supplementary material]. These reductions were driven by exacerbations treated with oral corticosteroids, the rate of which was reduced with budesonide/formoterol 320/9 μg and 160/9 μg versus placebo (37.5% and 42.9%, respectively) and formoterol (24.1% and 30.6%, respectively) [p ≤ 0.006]. In addition, the percentage of patients in the budesonide/formoterol 320/9 μg, budesonide/formoterol 160/9 μg, formoterol and placebo groups who experienced an exacerbation was greater in those receiving ICS therapy either alone or in combination before the study (33.7%, 38.2%, 39.9% and 40.1%, respectively) compared with those not previously receiving ICS (25.8%, 24.9%, 29.4% and 32.8%, respectively).\nFig. 6Kaplan-Meier probability curve for the time to first chronic obstructive pulmonary disease exacerbation during randomized treatment. BUD = budesonide; DPI = dry powder inhaler; FM = formoterol; pMDI = pressurized metered-dose inhaler. 
* p ≤ 0.004 vs placebo; † p = 0.026 vs FM.\n\nBODY.RESULTS.EFFICACY EVALUATIONS.HEALTH STATUS AND COPD SYMPTOMS:\nImprovements in SGRQ total score were significantly greater for both budesonide/formoterol dosages compared with placebo (p ≤ 0.006) and for budesonide/formoterol 160/9 μg compared with formoterol (p = 0.006; table III). The improvements from baseline in SGRQ total score in the active treatment groups were not clinically meaningful versus placebo (i.e. decrease of ≥4 points). Significantly greater improvements in COPD symptom variables were observed for both budesonide/formoterol dosages versus placebo for all variables (BCSS, dyspnoea score, cough score, sputum score, sleep score, percentage of awakening-free nights and total daily rescue medication use) [p ≤ 0.003]. Compared with formoterol, mean improvements were significantly greater with budesonide/formoterol 320/9 μg (p ≤ 0.038) for all COPD symptom variables except sputum score and percentage of awakening-free nights, and with budesonide/formoterol 160/9 μg (p ≤ 0.047) for all COPD symptom variables except BCSS, dyspnoea and sputum score (table IV).\nTable IIIMean changes (SD) in SGRQ total and domain scores from baselinea to end of treatmentbTable IVMean (SD) changes in chronic obstructive pulmonary disease (COPD) symptom variables from baselinea to the average over the randomized treatment period Improvements (i.e. reductions) in dyspnoea scores were significantly greater for all active treatments versus placebo (p ≤ 0.003) and for budesonide/formoterol 320/9 μg versus formoterol (p = 0.032; table IV). All active treatment arms demonstrated ≥0.2 points (minimal important difference) change from baseline, but only budesonide/formoterol 320/9 μg had an increase of ≥0.2 points over placebo. A significantly (p < 0.001) greater percentage of patients in both budesonide/formoterol groups and in the formoterol group experienced clinically meaningful improvements in dyspnoea versus placebo (see figure S3 in the supplementary material).\n\nBODY.RESULTS.SAFETY EVALUATIONS:\nMean treatment exposure was lowest for placebo (270 days) compared with budesonide/formoterol 320/9 μg (305 days), budesonide/formoterol 160/9 μg (299 days) and formoterol (289 days). The most commonly reported AE (irrespective of causality) was COPD, which had a lower incidence with budesonide/formoterol 320/9 μg versus placebo and a slightly higher incidence with budesonide/formoterol 160/9 μg versus placebo (table V). AEs considered by the investigator to be related to study medication were generally similar among treatment groups with the most commonly reported being oral candidiasis, COPD and dysphonia (see table SI in the supplementary material).\nTable VOverall adverse events (AEs) [irrespective of relationship to study medication] reported by ≥3% of patients The total incidence of pneumonia-related AEs (pneumonia, bronchopneumonia, lobar pneumonia and pneumonia staphylococcal) was similar for budesonide/formoterol 320/9 μg and 160/9 μg (4.0% and 3.4%, respectively) compared with formoterol (3.4%) and placebo (5.0%) [see table SII in the supplementary material]. For potential lung infections other than pneumonia, the incidence was slightly higher in the active treatment groups versus placebo, which was driven largely by bronchitis (see table SII in the supplementary material). 
AEs typically or potentially associated with local and systemic effects of inhaled corticosteroids (local effects: aphonia, dysphonia, oral candidiasis and candidiasis; systemic effects: weight gain, adrenal suppression, ocular effects, skin effects, psychiatric disorder, diabetes control, thirst, taste effects and bone effects) were more frequent with budesonide/formoterol 320/9 μg (10.3% and 4.0%, respectively) and budesonide/formoterol 160/9 μg (5.7% and 4.5%, respectively) than formoterol (0.6% and 2.6%, respectively) and placebo (2.5% and 2.7%, respectively). The overall incidence of LABA class effects (i.e. tremor, palpitation, tachycardia, potassium changes, glucose changes, headache, agitation, anxiety, sleep effects and muscle cramp) was low, but higher among active treatment groups (budesonide/formoterol 320/9 μg [9.5%], budesonide/formoterol 160/9 μg [8.9%] and formoterol [6.5%]) versus placebo (4.8%). The incidence of cardiac-related AEs was higher for all active treatments (10.5–11.3%) versus placebo (6.9%). The most common cardiac-related AEs were hypertension and angina pectoris (budesonide/formoterol 320/9 μg [2.4% and 1.2%, respectively], budesonide/formoterol 160/9 μg [1.6% and 0.8%, respectively], formoterol [2.6% and 1.8%, respectively] and placebo [2.1% and 1.0%, respectively]). Although the number of atrial fibrillation AEs was low, all cases occurred within the active treatment groups. Discontinuations due to AEs (DAEs) occurred in 238 patients, with a similar incidence observed across treatment groups (range 11.3–12.5%) [see table SIII in the supplementary material]. The most common DAE was COPD, which was highest in the formoterol group (7.3%) and lowest in the budesonide/formoterol 320/9 μg group (4.0%); the incidence was similar in the budesonide/formoterol 160/9 μg (6.1%) and placebo (6.0%) groups (see table SIII in the supplementary material). The overall incidence of non-fatal serious adverse events (SAEs) was highest with formoterol (17.8%) compared with the other treatment groups and higher with budesonide/formoterol 320/9 μg (15.6%) and budesonide/formoterol 160/9 μg (13.6%) versus placebo (12.1%) [see table SIV in the supplementary material]. The most common non-fatal SAEs (occurring in five or more patients [all treatment groups combined]) were COPD (6.8%), pneumonia (1.5%), atrial fibrillation (0.5%), angina pectoris (0.3%), bronchitis (0.3%) and coronary artery disease (0.3%). These non-fatal SAEs occurred at a similar incidence in all treatment groups, except for COPD, which was slightly more common in the active treatment groups (budesonide/formoterol 320/9 μg [7.1%], budesonide/formoterol 160/9 μg [6.7%] and formoterol [7.9%]) versus placebo (5.6%). Fifteen patients died during the randomized treatment period with no imbalance across treatments: three in the budesonide/formoterol 320/9 μg group; six in the budesonide/formoterol 160/9 μg group; two in the formoterol group; and four in the placebo group. Fifteen patients died after the randomized treatment period (≥2 days after stopping study treatment) with no imbalance across treatments: five in the budesonide/formoterol 320/9 μg group; two in the budesonide/formoterol 160/9 μg group; four in the formoterol group; and four in the placebo group. None of the deaths were considered drug related by the investigator. Clinically significant changes in vital signs, 24-hour urinary cortisol, and 12-lead ECGs, QT interval and Holter assessments were rare. 
The geometric mean values of 24-hour urinary cortisol at 6 months and end of treatment were lower in both budesonide/formoterol dosage groups compared with formoterol and placebo (see table SV in the supplementary material). Significant differences in 24-hour urinary cortisol were observed between the budesonide/formoterol 320/9 μg and placebo groups (p = 0.035) at month 6 and between the budesonide/formoterol 320/9 μg and formoterol groups (p = 0.044) at end of treatment. Corrected QT (QTc) intervals ≥450 msec were experienced by similar percentages of patients across all treatment groups, whereas few patients experienced QTc intervals ≥500 msec or a QTc change ≥60 msec (see table SVI in the supplementary material). Data from ECG and Holter recordings showed a similar incidence of new onset atrial fibrillation in the active and placebo groups. BMD was stable over the 1-year study period in all treatment groups, with small but statistically significant differences in changes from baseline observed for budesonide/formoterol 320/9 μg compared with all other treatments for total lumbar spine BMD (p ≤ 0.037) and for budesonide/formoterol 320/9 μg versus formoterol for total hip BMD (p = 0.012) [see table SVII in the supplementary material]. The differences between the treatment groups in changes from baseline in BMD were of unclear clinical relevance, as the geometric least squares mean ratios for these treatment group comparisons were close to 1 (range 0.98–0.99). Minor increases in lenticular opacities and intraocular pressure were noted across all treatment groups in the ophthalmological analysis subset; the magnitude of these changes was similar among the treatment groups (see table SVIII in the supplementary material). Small but significant differences were observed for the change from baseline in posterior subcapsular score between the budesonide/formoterol groups (p = 0.022). Clinically significant changes in ophthalmological assessment were infrequent.\n\nBODY.DISCUSSION:\nIn this study, both budesonide/formoterol dosage strengths resulted in significant improvements from baseline over the randomized treatment period in pulmonary function, health status and COPD symptoms, as measured by dyspnoea, cough and sputum production, and a clinically significant reduction in COPD exacerbation rate compared with placebo. In addition, both budesonide/formoterol pMDI dosages were well tolerated relative to formoterol alone and placebo. The results of this 12-month study confirm those from the earlier related 6-month study by Tashkin et al.[4] and are consistent with those from studies of budesonide/formoterol DPI.[2,3] Taken together, the results demonstrate that this ICS/LABA combination provides benefits beyond formoterol alone in the treatment of COPD. The contribution of budesonide to improvements from baseline in pulmonary function and COPD symptoms was demonstrated by the greater efficacy of budesonide/formoterol 320/9 μg compared with formoterol alone for the co-primary variables of pre-dose FEV1 and 1-hour post-dose FEV1, and for dyspnoea and total BCSS scores, supporting use of the higher-dose combination for COPD symptoms. Both budesonide/formoterol dosage strengths demonstrated greater improvements from baseline compared with formoterol alone in morning and evening PEF, 12-hour FEV1, symptom scores and rescue medication use. 
In addition, the 25–29% reduction in exacerbation rate observed with both budesonide/formoterol dosage strengths beyond the substantial reduction achieved with formoterol alone further demonstrates the important contribution of budesonide to the combination product. The magnitude of the reduction in exacerbation rates reported in this study is similar to that reported previously for fixed-dose regimens of budesonide/formoterol 320/9 μg twice daily[2–4] and fluticasone propionate/salmeterol[11–13] relative to LABA alone. A key difference between this 12-month study and the 6-month study of similar design reported by Tashkin et al.[4] is that this study was powered to show a difference in exacerbations, while the 6-month study was not. This study demonstrated a significant reduction in exacerbation rates in patients treated with budesonide/formoterol 320/9 μg and 160/9 μg compared with those receiving formoterol or placebo. The fact that a greater percentage of patients in the formoterol and placebo groups who previously received ICS therapy, either alone or in combination, experienced exacerbations compared with those who did not previously receive ICS therapy may result in a skewing of the data; however, a similar response was observed in combination budesonide/formoterol patients with a greater percentage of patients who previously received ICS experiencing exacerbations compared with patients who were not previously treated with ICS therapy. Similarly, a greater percentage of patients who previously received ICS therapy discontinued in the formoterol and placebo groups than those who did not previously receive ICS therapy; however, this trend was reversed in the combination budesonide/formoterol groups. The long-term nature of this study allowed for a more thorough assessment of tolerability compared with the 6-month study.[4] No differences in pneumonia incidence were observed among treatment groups in this study or in the 6-month study, despite a shorter exposure time in the placebo group.[4] These results contrast with previous COPD studies that showed an increased incidence of pneumonia with fluticasone propionate/salmeterol treatment.[11–14] However, in this study and the 6-month study,[4] the incidence of potential lung infections other than pneumonia was slightly higher in all active treatments compared with placebo; these differences were largely driven by an increased incidence of bronchitis. The incidence of SAEs, including COPD SAEs, was slightly higher in both budesonide/formoterol groups compared with placebo. This finding may be attributed to the earlier discontinuation in the placebo group, which resulted in a shorter study drug exposure (35 and 29 fewer days vs budesonide/formoterol 320/9 μg and 160/9 μg, respectively). Although the overall incidence of atrial fibrillation, a known LABA class effect, was small (1.1%), an imbalance was noted between the formoterol-containing arms and placebo. However, no differences in new-onset atrial fibrillation were apparent among groups based on 24-hour Holter monitoring or ECG data collected at repeated visits during the randomized treatment period. 
Overall, the safety profile of budesonide/formoterol pMDI in this 12-month study was similar to that reported in the 6-month study by Tashkin et al.,[4] suggesting no increase in safety concerns with a longer duration of treatment.\n\nBODY.CONCLUSIONS:\nBoth dosage strengths of budesonide/formoterol pMDI (320/9 μg and 160/9 μg) were effective and well tolerated over 1 year of treatment in this population of patients with moderate to very severe COPD. In addition, budesonide/formoterol pMDI 320/9 μg demonstrated greater efficacy compared with formoterol for pre-dose FEV1, suggesting that the higher dosage strength containing twice the daily dose of budesonide is appropriate in this patient population.\n\nBODY.ELECTRONIC SUPPLEMENTARY MATERIAL:\nSupplementary material, approximately 247 KB.\n\n**Question:** Compared to Placebo what was the result of Budesonide/formoterol pMDI 320/9 μg on Baseline-adjusted average 12-hour FEV1 on the day of randomization?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly increased\n"
}
] |
significantly increased
|
386
|
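The methods text in the row above defines baseline-adjusted average 12-hour FEV1 as the area between the post-dose FEV1-over-time curve and the pre-dose baseline, divided by the observation time. A minimal sketch of that calculation, assuming a trapezoidal rule over the listed serial-spirometry timepoints; the FEV1 values and the 1.05 L baseline below are hypothetical and are not trial data.

```python
def baseline_adjusted_average_fev1(times_min, fev1_litres, baseline_litres):
    """Area between the post-dose FEV1 curve and the pre-dose baseline
    (trapezoidal rule), divided by the total observation time."""
    area = 0.0
    for i in range(1, len(times_min)):
        y0 = fev1_litres[i - 1] - baseline_litres
        y1 = fev1_litres[i] - baseline_litres
        area += (y0 + y1) / 2 * (times_min[i] - times_min[i - 1])
    return area / (times_min[-1] - times_min[0])


# Post-dose sampling times (minutes) taken from the methods; FEV1 values are made up for illustration.
times = [5, 15, 30, 60, 120, 180, 240, 360, 480, 600, 720]
fev1 = [1.18, 1.22, 1.25, 1.27, 1.26, 1.24, 1.22, 1.20, 1.18, 1.16, 1.15]
print(f"{baseline_adjusted_average_fev1(times, fev1, 1.05):.3f} L average improvement over 12 h")
```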
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Effects of Bladder Training and/or Tolterodine in Female Patients with Overactive Bladder Syndrome: A Prospective, Randomized Study\n\n ABSTRACT:\nWe compared the effects of bladder training and/or tolterodine as first line treatment in female patients with overactive bladder (OAB). One hundred and thirty-nine female patients with OAB were randomized to treatment with bladder training (BT), tolterodine (To, 2 mg twice daily) or both (Co) for 12 weeks. Treatment efficacy was measured by micturition diary, urgency scores and patients' subjective assessment of their bladder condition. Mean frequency and nocturia significantly decreased in all treatment groups, declining 25.9% and 56.1%, respectively, in the BT group; 30.2% and 65.4%, respectively, in the To group; and 33.5% and 66.3%, respectively in the Co group (p<0.05 for each). The decrease in frequency was significantly greater in the Co group than in the BT group (p<0.05). Mean urgency score decreased by 44.8%, 62.2% and 60.2% in the BT, To, and Co groups, respectively, and the improvement was significantly greater in the To and Co groups than in the BT group (p<0.05 for each). Although BT, To and their combination were all effective in controlling OAB symptoms, combination therapy was more effective than either method alone. Tolterodine alone may be instituted as a first-line therapy, but may be more effective when combined with bladder training.\n\nBODY.INTRODUCTION:\nOveractive bladder (OAB), defined as increased frequency and urgency with or without urge incontinence, is a very common condition, especially in middle-aged women, with a reported incidence of approximately 17% of the general female population (1, 2). Since many women may consider OAB as a natural aging phenomenon and thus do not seek treatment, its actual prevalence may be even higher. Despite its common occurrence, however, our understanding (3-5) of the pathophysiology underlying the development of OAB has remained elementary. It is managed most commonly by drug therapy, followed by various behavioral interventions and, less frequently, by surgical methods. Each of these methods, however, has variable efficacy and adverse effects. Classically, urge incontinence has been considered most important, being the key index in treatment outcome assessments. However, since the new consensus by the International Continence Society on the diagnosis of OAB (6), the presence or absence of actual urine leakage was regarded as less important. With the adaptation of this new definition, the number of patients newly diagnosed with OAB is expected to increase, emphasizing the need to develop first-line interventions for their symptoms. The three most commonly employed methods for treating newly-diagnosed OAB are bladder training, anticholinergic therapy and a combination of the two. 
Here we report a controlled, randomized study comparing the clinical efficacy of these three methods, for the purpose of determining the most appropriate first-line therapeutic modality.\n\nBODY.MATERIALS AND METHODS:\nBetween May 2001 and April 2002, 139 women presenting with symptoms of urgency and frequency, with or without urge incontinence, were prospectively enrolled in this study. Inclusion criteria included being 18 yr of age or older, having a urination frequency of 8 or more times per day, the presence of urge symptoms that may or may not accompany incontinence, symptom duration of 3 months or longer and no prior history of treatment for OAB. We excluded patients with active urinary tract infection, clinically significant stress urinary incontinence, bladder outlet obstruction, interstitial cystitis, glaucoma or megacolon. Also excluded were patients with a maximal urine flow rate of less than 10 mL/sec or a postvoid residual urine amount that was more than 30% of the total amount voided on uroflowmetry. Upon enrollment, patients were randomly assigned to bladder training (BT, n=46), tolterodine treatment (To, 2 mg twice daily, n=47) or a combination of the two (Co, n=46). Each patient received a physical examination and uroflowmetry with postvoid residual urine scan. In addition, each patient kept a record of urgency scores, with 0 indicating no symptoms, 1 rarely, 2 occasionally, 3 often, and 4 all the time, and a frequency-volume chart. At 12 weeks, satisfaction score was assessed, with 0 representing fully satisfied, 1 much improved, 2 slightly improved, 3 no improvement, and 4 worsened; a change of 2 points or more was considered symptom improvement. The frequency-volume chart and a survey of adverse effects of medication were repeated every 4 weeks. Patients in the BT and Co groups were educated of the bladder drills by a nurse specialist. Gross and functional anatomy of the lower urinary tract and pelvis were demonstrated, with instructions on how bladder drills can increase functional bladder capacity and actual voiding habits. Each patient was instructed to examine her frequency-volume chart to determine the longest interval she could initially hold urination and sustain urge symptoms. Each patient was then taught to increase this interval by 15-min increments, with the aim of achieving a 3 to 4 hr interval and a voided volume of 300 to 400 mL. In addition, whenever urgency occurred, patients were educated to perform Kegel exercises for symptom alleviation. All bladder drill education was performed by one nurse specialist, who telephoned each patient every 2 weeks to make sure she was following the exact drill taught. In the Co group, the bladder drill was started at the same time as the start of medication. Of the 139 patients, 89 (64.0%) completed the 12 week treatment regimen and were subjects of outcome analyses. Voiding frequency, nocturia, urgency score before and after the treatment, satisfaction score at 12 weeks and adverse events were measured in the three groups. Student's t-test was used for statistical analysis, and p<0.05 was defined as statistically significant.\n\nBODY.RESULTS:\nPatient characteristics, including age, weight, symptom severity and duration, parity, pretreatment uroflowmetry profile, presence and degree of urge or stress incontinence, and urgency score, were similar among the three treatment groups (Table 1). 
At 12 weeks, 26 patients (56.5%) in the BT group, 32 (68.1 %) in the To group, and 31 (67.4%) in the Co group had completed the protocol. Mean frequency in the three groups decreased significantly, from 10.9 to 8.1 (25.9%) in the BT group, from 11.6 to 8.1 (30.2%) in the To group, and from 11.9 to 7.9 (33.5%) in the Co group (p<0.05 each) (Fig. 1). Nocturia also decreased significantly in the three groups, from 1.5 to 0.6 per night (56.1%) in the BT group, from 1.7 to 0.6 (65.4%) in the To group, and from 2.0 to 0.6 (66.3%) in the Co group (p<0.05 each) (Fig. 2), as did urgency scores, from 2.6 to 1.4 (44.8%) in the BT group, from 2.8 to 1.1 (62.2%) in the To group, and from 3.0 to 1.2 (60.2%) in the Co group (p<0.05 each) (Fig. 3). When we compared these changes among the three groups, we found that patients in the Co group showed greater improvements in frequency and urgency scores than did patients in the BT group (p<0.05 each), and that urgency scores in the To and Co groups showed significantly greater improvement than in the BT group (p=0.017 and p=0.021, respectively). No difference was observed between the To and Co groups. Satisfaction scores at 12 weeks did not differ among the three groups, being 1.5 in the BT group, with improvement in 53.9%; 1.4 in the To group, with improvement in 63.0%; and 1.3 in the Co group, with improvement in 71.0%. No adverse events were reported from the BT group, whereas 13 patients (40.6%) in the To group and 12 (38.7%) in the Co group reported one or more adverse events (Table 2). Most of the events were mild and well-tolerated, except that severe dry mouth occurred in 2 patients from each group, which led them to discontinue medication (withdrawal rates, 6.3% and 6.5%, respectively).\n\nBODY.DISCUSSION:\nDespite extensive research on the complicated interactions involving the neural network of the cerebrum, sacral cord and the detrusor, little is known regarding the pathologic process that occurs during the development of OAB (4, 5, 7). Accordingly, therapeutic approaches for OAB have been diverse, consisting largely of medical therapy, but also including behavioral interventions such as bladder training, pelvic floor muscle exercises with or without biofeedback, and electrical stimulation. Other therapeutic approaches have included neuromodulation of the sacral nerves, intravesical instillations of antimuscarinics and bladder overdistention, with augmentation cystoplasty remaining as the last resort. Naturally, non-invasive methods, including medical therapy and behavioral intervention, have been the initial treatments of choice for OAB symptoms. Although it is not known how bladder drills control and sustain OAB symptoms, it is believed that ill-trained voiding habits leading to decreased bladder compliance by frequent voiding are corrected by progressively increasing the voiding interval, thus increasing the voided volume and the functional bladder capacity. Retraining these patients to void at normal intervals would restore compliance when capacity increases and may contribute to symptom improvement (8-11). Observations on the electrical stimulation of the pelvic floor muscles (12) have shown that, during electrical stimulation, hyperactivity of the bladder was either diminished or completely abolished. This mechanism has been exploited as part of bladder training regimens. Clinically, bladder training has been shown to be effective in the management of urgency and urge incontinence (8, 13-15). 
When bladder training was compared with medical therapy (200 mg flavoxate hydrochloride plus 25 mg imipramine three times daily for 4 weeks), urge incontinence was completely controlled in 84% of patients in the BT group, with symptom relief in 76%, compared with 56% and 48%, respectively, in the medication group (16). A larger study (17), however, found that addition of behavioral interventions did not seem to offer any additional benefit to medical therapy (tolterodine) after 6 months. Urgency episodes and daytime frequency decreased 83% and 27.3%, respectively, in the tolterodine group, compared with 78.7% and 23%, respectively, in the combination group. Improvements in patients' perception of urinary symptoms were reported by 85.9% of the tolterodine group and 81.7% of the combination group. We found that all patients, a homogeneous group of female patients, regardless of treatment methodology, reported significant improvements in all voiding parameters examined, but that the addition of medication to BT resulted in additional improvements in urgency and frequency. Our finding of better outcomes in the Co than in the To group may be due to the bladder training program and design used here, which was a short-term intensive course of bladder drills. As many authors have emphasized, patients' understanding and compliance are the key factors in bladder training. Accordingly, the exercise program that we used did not simply require patients to contract and relax their pelvic floor muscles for certain periods of time. Instead, patients were given specific goals of 15 min each in sustaining contractions and were instructed how to increase these intervals. Having such a specific aim of what to accomplish with each exercise must have been enforcing and rewarding as well. In addition, repeated instructions, confirmation and reassurance were essential and constantly encouraged patients to stay on the right track for 12 weeks. Similar results have been reported (18), in which patients were provided with written information sheets describing the aims and endpoints of therapy as well as specific instructions on bladder drills. Although there were no formal bladder training sessions or additional follow-up for those in the bladder training-only group, the self-explanatory information sheets the patients kept would have reinforced and encouraged patients while on bladder drills. Thus, patients on medication plus bladder training reported significantly better results after therapy than did patients on medication alone. In the previous study, in which the drills were simplified, it may have facilitated patient adherence to treatment, but without any further reinforcements, the therapeutic benefits of BT may have been compromised (17). In this study, we found that patient adherence to treatment was lowest and discontinuation rate highest in the BT group, although the treatment efficacy results were similar at the end of 12 weeks. This may be attributed to the longer duration of patient maneuvering required before experiencing significant improvement in this BT-only group. Thus, despite many efforts by the physician to make patients stay on the correct treatment regimen, it was evidently easier for patients to take medication twice a day than to repeat exercises many times a day for months. We noted no adverse effects in the BT group, whereas, in the other 2 groups, the most common adverse effect was dry mouth. 
We found that 6.3% of patients in the To group and 6.5% in the Co group had to discontinue medication due to severe dry mouth, percentages lower than those previously reported (19, 20). These results indicate that 4 mg/day tolterodine is safe and does not require dose adjustment. From these results, we conclude that bladder training, tolterodine and the combination of the two are all effective in controlling OAB symptoms, with comparable patient satisfaction. Since a short-term intensive course of bladder drill is as effective as medical or combined therapy in all voiding symptoms, but does not have any adverse effects, bladder training can be safely and effectively employed as first line treatment for OAB. In patients who have predominantly symptoms of frequency or urgency, tolterodine may be more effective, especially when combined with bladder training.\n\n**Question:** Compared to Tolterodine only (To) what was the result of Bladder training plus tolterodine (Co) on Micturition frequency?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** no significant difference\n"
}
] |
no significant difference
|
363
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Efficacy and Tolerability of Budesonide/Formoterol in One Hydrofluoroalkane Pressurized Metered-Dose Inhaler in Patients with Chronic Obstructive Pulmonary Disease\n\n ABSTRACT:\nBackground: Combination therapy with a long-acting bronchodilator and an inhaled corticosteroid (ICS) is recommended in patients with chronic obstructive pulmonary disease (COPD) who have frequent exacerbations. The efficacy and tolerability of the combination of budesonide/formoterol have been demonstrated in patients with COPD when administered via the dry powder inhaler (DPI) in a 1-year study and when administered via the hydrofluoroalkane (HFA) pressurized metered-dose inhaler (pMDI) in a 6-month study. Objective: This study assessed the long-term efficacy and tolerability of budesonide/formoterol HFA pMDI in patients with moderate to very severe COPD. Methods: This was a 12-month, randomized, double-blind, double-dummy, parallel-group, active- and placebo-controlled, multicentre study (NCT00206167) of 1964 patients aged ≥40 years with moderate to very severe COPD conducted from 2005 to 2007 at 237 sites in the US, Europe and Mexico. After 2 weeks of treatment based on previous therapy (ICSs, short-acting bronchodilators allowed), patients received one of the following treatments twice daily: budesonide/formoterol pMDI 160/4.5 μg × two inhalations (320/9 μg); budesonide/formoterol pMDI 80/4.5 μg × two inhalations (160/9 μg); formoterol DPI 4.5 μg × two inhalations (9 μg); or placebo. Main outcome measures: The co-primary efficacy variables were pre-dose forced expiratory volume in 1 second (FEV1) and 1-hour post-dose FEV1. Results: Budesonide/formoterol 320/9 μg demonstrated greater improvements in pre-dose FEV1 versus formoterol (p = 0.008), and both budesonide/formoterol doses demonstrated greater improvements in 1-hour post-dose FEV1 versus placebo (p < 0.001). The rate of COPD exacerbations was lower in both budesonide/formoterol groups compared with formoterol and placebo (p ≤ 0.004). Both budesonide/formoterol doses were more effective than placebo (p ≤ 0.006) for controlling dyspnoea and improving health status (St George's Respiratory Questionnaire). All treatments were generally well tolerated. The incidence of pneumonia was not different for active (3.4–4.0%) and placebo (5.0%) groups. Conclusions: Budesonide/formoterol pMDI (320/9 μg and 160/9 μg) improved pulmonary function and reduced symptoms and exacerbations over 1 year in patients with moderate to very severe COPD. Only budesonide/formoterol pMDI 320/9 μg demonstrated greater efficacy for both co-primary variables compared with formoterol DPI 9 μg. 
Both budesonide/formoterol pMDI dosages were well tolerated relative to formoterol and placebo.\n\nABSTRACT.ELECTRONIC SUPPLEMENTARY MATERIAL:\nSupplementary material is available for this article at 10.2165/00003495-200969050-00004 and is accessible for authorized users.\n\nBODY.BACKGROUND:\nIn patients with chronic obstructive pulmonary disease (COPD) and frequent exacerbations, the combination of a long-acting bronchodilator and an inhaled corticosteroid (ICS) is recommended.[1] Budesonide and formoterol delivered via a single dry powder inhaler (DPI; Symbicort® Turbuhaler®; AstraZeneca, Lund, Sweden) have been shown to improve pulmonary function[2,3] and reduce COPD exacerbation rates.[2,3] Fixed-combination budesonide/formoterol is available in the US in a hydrofluoroalkane pressurized metered-dose inhaler (pMDI; Symbicort® Inhalation Aerosol; AstraZeneca LP, Wilmington, DE, USA). While most patients with COPD can benefit from either a pMDIor DPI, the pMDI formulation allows patients whomay have inadequate peak inspiratory flow for optimal delivery from a DPI to use the pMDI effectively. A recent 6-month study demonstrated the efficacy and tolerability of budesonide/formoterol pMDI in patients with moderate to very severe COPD.[4] Treatment with budesonide/formoterol pMDI resulted in significant improvements in pulmonary function, dyspnoea and health-related quality of life; however, the duration of the study was limiting with respect to the analysis of low-frequency events, such as COPD exacerbations, and precluded an assessment of safety outcomes with long-term exposure.[4] In this study, the long-term efficacy and tolerability of two dosage strengths of budesonide/formoterol pMDI were assessed over 12 months in patients with moderate to very severe COPD.\n\nBODY.PATIENTS AND METHODS.PATIENTS:\nThe inclusion criteria were designed to select a population with moderate to very severe COPD with previous exacerbations (i.e. appropriate candidates for combination ICS/long-acting β2-adrenoceptor agonist [LABA] therapy): age ≥40 years, diagnosis of symptomatic COPD for >2 years, ≥10 pack-year smoking history, prebronchodilator forced expiratory volume in 1 second (FEV1) of ≤50% of predicted normal and prebronchodilator FEV1/forced vital capacity (FVC) of <70%. Patients were to have a Modified Medical Research Council dyspnoea scale score of ≥2 and a history of at least one COPD exacerbation requiring oral corticosteroids or antibacterials within 1–12 months before the first study visit. Additional enrolment criteria were the same as those in a similar 6-month study by Tashkin et al.[4]\n\nBODY.PATIENTS AND METHODS.STUDY DESIGN:\nThis was a randomized, double-blind, double-dummy, parallel-group, active- and placebo-controlled, multicentre trial (ClinicalTrials.gov identifier: NCT00206167) conducted from 2005 to 2007 at 237 sites in the US, Europe and Mexico. The study protocol was approved by a local institutional review board and ethics committee, and written informed consent was obtained from patients. The study was designed to conform with the Declaration of Helsinki, and was consistent with the International Conference on Harmonisation and Good Clinical Practice and applicable regulatory requirements. After meeting eligibility criteria, patients entered a 2-week run-in period, during which they received ICS monotherapy if previously stable on ICS (alone or in combination) and ipratropium bromide at a fixed dose if previously receiving anticholinergics. 
Albuterol (salbutamol) was permitted for rescue use throughout the study. At randomization, all previous ICSs were discontinued and patients were randomized to one of four treatments (figure 1). Patients who were treated with ipratropium bromide during the run-in period were allowed to continue on it throughout the study period. Other allowed and disallowed concomitant medications were the same as those reported in the study by Tashkin et al.[4]\nFig. 1Study design. To maintain blinding, patients received both a pressurized metered-dose inhaler (pMDI) and a dry powder inhaler (DPI) containing either active treatment or double-dummy placebo (PL) as appropriate. This study used formoterol (FM) DPI (Oxis® Turbuhaler®, AstraZeneca, Lund, Sweden) as the FM comparator because FM is not available as a hydrofluoroalkane pMDI in the US. A previous study in asthma patients reported equivalent FM-related bronchodilatory effects when FM was administered in combination with budesonide (BUD) via pMDI or alone via DPI.[5] Patients were asked to return to the clinic for follow-up visits 3–8 at the end of months 1, 2, 4, 6, 9 and 12, and received a telephone call 4 weeks after the last clinic visit. bid = twice daily; R = randomization.\n\nBODY.PATIENTS AND METHODS.EFFICACY EVALUATIONS:\nThe co-primary efficacy variables, measured at all clinic visits, were pre-dose FEV1 and 1-hour post-dose FEV1. Additional pulmonary function variables included pre-dose and 1-hour post-dose FVC measured at all clinic visits, and morning and evening peak expiratory flow (PEF) recorded daily in patient diaries. In a subset of patients (n = 491 of 1964; 25%) who agreed to undergo spirometry testing, 12-hour serial spirometry was performed and pre-dose and 1-hour post-dose inspiratory capacity (IC) were collected at randomization and months 6 and 12. FEV1 was measured pre-dose and at 5, 15, 30, 60, 120, 180, 240, 360, 480, 600 and 720 minutes post-dose. Baseline-adjusted average 12-hour FEV1 was calculated as the area between the 12-hour post-dose FEV1-over-time curve and the baseline pre-dose FEV1, divided by observation time. Spirometry was performed according to American Thoracic Society guidelines.[6] Crapo-predicted normals for FEV1 were used.[7] Secondary efficacy endpoints included COPD exacerbations, dyspnoea (separate and composite scores) and health status, all assessed as previously described.[4] A COPD exacerbation was defined as worsening of COPD requiring an oral corticosteroid or hospitalization.[4] Dyspnoea was assessed daily before the evening dose of study medication using the Breathlessness Diary, a validated single-item component of the Breathlessness Cough and Sputum Scale (BCSS).[8] Health status was assessed at months 1, 2, 6 and 12 using the St George's Respiratory Questionnaire (SGRQ).[9,10] Sleep score, percentage of awakening-free nights (sleep score of 0) and study rescue medication use were assessed as previously described by Tashkin et al.[4]\n\nBODY.PATIENTS AND METHODS.SAFETY EVALUATIONS:\nSafety was assessed by adverse event (AE) reporting. Pneumonia events were reported by physicians based on the Medical Dictionary for Regulatory Activities (version 10.0) pneumonia-related preferred terms (pneumonia, bronchopneumonia, lobar pneumonia or pneumonia staphylococcal). 
Vital signs and 12-lead ECGs were evaluated as previously described by Tashkin et al.[4] Subsets of patients were assessed for 24-hour urinary cortisol (n = 179), 24-hour Holter monitoring (n = 520), bone mineral density (BMD) at the hip and spine regions (n = 326), and ophthalmological assessments (n = 461), including intraocular pressure and progression of lenticular opacities. In all patients, blood and urine samples were collected, and comprehensive physical examinations were carried out at the time of screening and at the end of month 12. Blood and urine samples were analysed by a central laboratory (Quest Diagnostics Clinical Laboratories, Van Nuys, CA, USA). Vital signs, including heart rate and blood pressure, were measured at all study visits. Samples to assess 24-hour urinary cortisol levels were collected in a subset of patients at or before randomization and within 1 week before the 6- and 12-month visits, and analysed by Quest Diagnostics. 24-Hour Holter monitoring was performed at screening and months 1 and 4, BMD assessments (two sets of dual energy x-ray absorptiometry scans in the hip and spine regions) were performed at screening and the end of month 12, and ophthalmology assessments (intraocular pressure and lenticular opacities) were performed at screening and the end of months 6 and 12.\n\nBODY.PATIENTS AND METHODS.STATISTICAL ANALYSES:\nThe efficacy analysis set (i.e. intent-to-treat population) included all randomized patients who received at least one dose of randomized study medication and contributed sufficient data for at least one co-primary or secondary efficacy endpoint to be calculated during the randomized treatment period. The safety analysis population included all randomized patients who received at least one dose of randomized study medication and from whom any post-randomization data were available. For the subsets of patients who underwent serial spirometry, 24-hour urinary cortisol, BMD, 24-hour Holter monitoring and ophthalmological analyses, analysis sets were defined as those patients who received at least one dose of randomized study medication and had baseline and on-treatment values for the variable being assessed. Patients who discontinued prematurely completed final visit (month 12) assessments at the time of discontinuation, followed by the 4-week follow-up. A sample size of approximately 400 patients per treatment group was calculated to allow 90% power to detect a reduction in the number of COPD exacerbations of approximately 30%, adjusting for overdispersion of 2.3. This sample size ensured >95% power to detect a difference of 0.10L in FEV1, given an estimated standard deviation of 0.3 L. All tests were two-sided, with p ≤ 0.05 considered significant. The prespecified primary comparators were formoterol DPI for pre-dose FEV1 to demonstrate the contribution of budesonide and placebo for 1-hour post-dose FEV1. To address issues of multiplicity relating to multiple-dose comparisons and multiple variables of interest, a fixed-sequence testing procedure was adopted. The primary comparisons were prespecified as budesonide/formoterol pMDI 320/9 μg compared with (i) placebo for pre-dose FEV1 and 1-hour post-dose FEV1; and (ii) formoterol DPI 9 μg for pre-dose FEV1. If significant differences were obtained, comparisons continued with budesonide/formoterol pMDI 320/9 μg compared with placebo for (i) dyspnoea; (ii) SGRQ total score; and (iii) number of exacerbations. 
If significant differences were obtained for each of these key secondary variables, the lower dose of budesonide/formoterol pMDI was tested on the co-primary variables, as previously described, and, if significant differences were obtained, testing continued with number of exacerbations, dyspnoea and SGRQ total score compared with placebo. For all secondary efficacy variables, the primary comparison was budesonide/formoterol pMDI versus placebo. Primary and secondary variables were assessed as the change from baseline to the average over the randomized treatment period except for average 12-hour FEV1, mean FEV1 at 12 hours and SGRQ at end of treatment. Changes from baseline in the co-primary efficacy variables were analysed via analysis of co-variance (ANCOVA), adjusting for treatment, country and baseline value. The following three key secondary efficacy variables were identified: (i) dyspnoea; (ii) SGRQ total score; and (iii) exacerbations. Secondary variables were analysed similar to the co-primary efficacy variables; however, exacerbation rate (number/patient-treatment year) was analysed using a Poisson regression model, adjusted for country and differential randomization time, and time to first COPD exacerbation was described via Kaplan-Meier plot and analysed using the log-rank test. The number and percentage of patients experiencing clinically meaningful changes (based on a prespecified minimal important difference) in dyspnoea were analysed using the Cochran-Mantel-Haenszel test, adjusting for country. Geometric mean 24-hour urinary cortisol levels at end of treatment were compared between treatment groups using a multiplicative ANCOVA model. Mean changes from baseline to the average during the randomized treatment period in 12-lead ECG, and Holter variables and ophthalmology variables were analysed using a model similar to that used for the co-primary variables. BMD variables (natural logarithm of the analysis timepoint minus the natural logarithm of the respective baseline value for hip and spine) were analysed using an ANCOVA model adjusting for country, treatment and natural logarithm of the baseline value. Other safety data were summarized using descriptive statistics. No formal hypothesis testing of the safety data was performed.\n\nBODY.RESULTS.PATIENTS:\nOf 1964 randomized patients, 1355 completed the study (figure 2). Discontinuation was greater with placebo versus budesonide/formoterol and formoterol driven by withdrawal of consent. Time to discontinuation was significantly (p ≤ 0.004) prolonged in both budesonide/formoterol groups versus placebo. AE was the most common reason for study discontinuation, with a similar incidence across treatments (12.1–13.5%). Approximately 60% of patients had documented use of an ICS, either alone or in combination before entering the study. The percentage of patients who discontinued from the study in the formoterol and placebo groups was slightly higher in patients previously treated with ICS (34.9% and 38.0%, respectively) compared with those not previously receiving ICS before the study (26.8% and 33.9%, respectively). In contrast, patients in the budesonide/formoterol 320/9-μg group previously receiving ICS had a lower discontinuation rate compared with those not previously receiving ICS (23.1% vs 34.1%, respectively). In the budesonide/formoterol 160/9-μg group, the percentage of patients who discontinued the study was similar for those who were previously receiving an ICS (29.5%) and those who were not (28.2%).\nFig. 
2Patient disposition. AE = adverse event; bid = twice daily; BUD = budesonide; DPI = dry powder inhaler; FM = formoterol; med = medication; PL = placebo; pMDI = pressurized metered-dose inhaler. Most demographic and disease characteristics at baseline were similar across treatments (table I). However, in the serial spirometry subset (n = 491), mean percentage reversibility was greater in the budesonide/formoterol 320/9 μg (19.9%), budesonide/formoterol 160/9 μg (20.7%) and placebo (19.5%) groups versus formoterol (16.9%). COPD severity[1] was moderate in 17.8%, severe in 60.4% and very severe in 21.5% of patients. Common co-morbid conditions included hypertension (41.6%), lipid profile abnormalities (22.0%), cardiac disease (17.7%), diabetes mellitus (11.0%), osteoporosis (10.5%), cataracts (5.2%), atrial fibrillation/arrhythmia (4.3%) and congestive cardiac failure (2.9%).\nTable IPatient demographic and baseline clinical characteristics of randomized patients\n\nBODY.RESULTS.EFFICACY EVALUATIONS.PULMONARY FUNCTION.CO-PRIMARY ASSESSMENTS:\nImprovements in pre-dose FEV1 were significantly greater for budesonide/formoterol 320/9 μg compared with formoterol (primary comparison; p = 0.008) and for both budesonide/formoterol dosages compared with placebo (p < 0.001) [figure 3a]. Although improvements in 1-hour post-dose FEV1 were significantly greater for both budesonide/formoterol dosages compared with placebo (primary comparison; p < 0.001), budesonide/formoterol 320/9 μg also resulted in significantly greater improvements compared with formoterol (p = 0.023) [figure 3b]. Improvements from baseline were apparent at the first assessment (pre-dose FEV1 at end of month 1; 1-hour post-dose FEV1 at day of randomization) and overall maintained over the 12-month treatment period for both budesonide/formoterol dosages (figure 3a and b).\nFig. 3Co-primary efficacy endpoints. Least squares mean change from baseline by study visit over the randomized treatment period in (a) pre-dose forced expiratory volume in 1 second (FEV1) and (b) 1-hour post-dose FEV1. BUD = budesonide; DPI = dry powder inhaler; FM = formoterol; pMDI = pressurized metered-dose inhaler. * p < 0.001 vs placebo; † p ≤ 0.023 vs FM.\n\nBODY.RESULTS.EFFICACY EVALUATIONS.PULMONARY FUNCTION.SECONDARY ASSESSMENTS:\nMorning and evening PEF improved significantly for all active treatments compared with placebo (p ≤ 0.012) and for both budesonide/formoterol dosages compared with formoterol (p ≤ 0.017) [table II]. Results for pre-dose and 1-hour post-dose FVC are presented in the online supplement (see figures S1a and b in the supplementary material ['ArticlePlus'] at http://links.adisonline.com/DGZ/A5).\nTable IIMean (SD) changes from baseline in additional pulmonary function assessments In the serial spirometry subset, a ≥15% improvement in FEV1 was observed at 5 minutes after dose administration (first assessment) with all active treatments on the day of randomization (budesonide/formoterol 320/9 μg, 17.7%; budesonide/formoterol 160/9 μg, 20.3%; formoterol 9 μg, 16.5%) and at end of treatment (budesonide/formoterol 320/9 μg, 21.8%; budesonide/formoterol 160/9 μg, 22.4%; formoterol 9 μg, 15.0%) [figures 4a, b and c]. There was no evidence of a diminished effect at end of treatment over the 12-hour period in the budesonide/formoterol groups. 
Mean FEV1 at 12 hours and baseline-adjusted average 12-hour FEV1 were significantly improved with both budesonide/formoterol dosages compared with placebo on the day of randomization and at end of treatment (p ≤ 0.002) [table II]. Although both budesonide/formoterol dosages demonstrated significant improvements in mean FEV1 at 12 hours and in baseline-adjusted average 12-hour FEV1 compared with formoterol on the day of randomization (p ≤ 0.029), only budesonide/formoterol 320/9 μg demonstrated this effect at the end of treatment (p ≤ 0.004) [table II].\nFig. 4Mean percentage change from baseline in forced expiratory volume in 1 second (FEV1) over 12 hours at randomization and end of treatment (EOT) for (a) budesonide (BUD)/formoterol (FM) 320/9 μg twice daily (bid) vs placebo, (b) BUD/FM 160/9 μg bid vs placebo and (c) FM 9 μg bid vs placebo. DOR = day of randomization; DPI = dry powder inhaler; pMDI = pressurized metered-dose inhaler. No significant differences in pre-dose IC were observed among the treatment groups (figure 5a). Improvements from baseline to the average over the randomized treatment period in 1-hour post-dose IC were significantly greater with both budesonide/formoterol dosages compared with placebo (p < 0.001) and formoterol (p ≤ 0.018) [figure 5b]. Both dosages of budesonide/formoterol resulted in a 350 mL (≈25%) improvement from baseline to the average over the randomized treatment period in 1-hour post-dose IC, and formoterol resulted in a 210 mL (17%) improvement in 1-hour post-dose IC.\nFig. 5Least squares mean change from baseline by study visit over the randomized treatment period in (a) pre-dose inspiratory capacity (IC) and (b) 1-hour post-dose IC. BUD = budesonide; DPI = dry powder inhaler; FM = formoterol; pMDI = pressurized metered-dose inhaler. * p < 0.001 vs placebo; † p = 0.01 vs FM; ‡ p < 0.05 vs FM; § p < 0.01 vs placebo.\n\nBODY.RESULTS.EFFICACY EVALUATIONS.CHRONIC OBSTRUCTIVE PULMONARY DISEASE (COPD) EXACERBATIONS:\nTime to first COPD exacerbation was significantly prolonged with both budesonide/formoterol dosages compared with placebo (p ≤ 0.004) and with budesonide/formoterol 320/9 μg compared with formoterol (p = 0.026) [figure 6]. In addition, significant reductions in the overall number of exacerbations per patient-treatment year were observed with budesonide/formoterol 320/9 μg and 160/9 μg versus placebo (37% and 41%, respectively; p < 0.001) and formoterol (25% and 29%, respectively; p ≤ 0.004) [see figure S2 in the supplementary material]. These reductions were driven by exacerbations treated with oral corticosteroids, the rate of which was reduced with budesonide/formoterol 320/9 μg and 160/9 μg versus placebo (37.5% and 42.9%, respectively) and formoterol (24.1% and 30.6%, respectively) [p ≤ 0.006]. In addition, the percentage of patients in the budesonide/formoterol 320/9 μg, budesonide/formoterol 160/9 μg, formoterol and placebo groups who experienced an exacerbation was greater in those receiving ICS therapy either alone or in combination before the study (33.7%, 38.2%, 39.9% and 40.1%, respectively) compared with those not previously receiving ICS (25.8%, 24.9%, 29.4% and 32.8%, respectively).\nFig. 6Kaplan-Meier probability curve for the time to first chronic obstructive pulmonary disease exacerbation during randomized treatment. BUD = budesonide; DPI = dry powder inhaler; FM = formoterol; pMDI = pressurized metered-dose inhaler. 
* p ≤ 0.004 vs placebo; † p = 0.026 vs FM.\n\nBODY.RESULTS.EFFICACY EVALUATIONS.HEALTH STATUS AND COPD SYMPTOMS:\nImprovements in SGRQ total score were significantly greater for both budesonide/formoterol dosages compared with placebo (p ≤ 0.006) and for budesonide/formoterol 160/9 μg compared with formoterol (p = 0.006; table III). The improvements from baseline in SGRQ total score in the active treatment groups were not clinically meaningful versus placebo (i.e. decrease of ≥4 points). Significantly greater improvements in COPD symptom variables were observed for both budesonide/formoterol dosages versus placebo for all variables (BCSS, dyspnoea score, cough score, sputum score, sleep score, percentage of awakening-free nights and total daily rescue medication use) [p ≤ 0.003]. Compared with formoterol, mean improvements were significantly greater with budesonide/formoterol 320/9 μg (p ≤ 0.038) for all COPD symptom variables except sputum score and percentage of awakening-free nights, and with budesonide/formoterol 160/9 μg (p ≤ 0.047) for all COPD symptom variables except BCSS, dyspnoea and sputum score (table IV).\nTable IIIMean changes (SD) in SGRQ total and domain scores from baselinea to end of treatmentbTable IVMean (SD) changes in chronic obstructive pulmonary disease (COPD) symptom variables from baselinea to the average over the randomized treatment period Improvements (i.e. reductions) in dyspnoea scores were significantly greater for all active treatments versus placebo (p ≤ 0.003) and for budesonide/formoterol 320/9 μg versus formoterol (p = 0.032; table IV). All active treatment arms demonstrated ≥0.2 points (minimal important difference) change from baseline, but only budesonide/formoterol 320/9 μg had an increase of ≥0.2 points over placebo. A significantly (p < 0.001) greater percentage of patients in both budesonide/formoterol groups and in the formoterol group experienced clinically meaningful improvements in dyspnoea versus placebo (see figure S3 in the supplementary material).\n\nBODY.RESULTS.SAFETY EVALUATIONS:\nMean treatment exposure was lowest for placebo (270 days) compared with budesonide/formoterol 320/9 μg (305 days), budesonide/formoterol 160/9 μg (299 days) and formoterol (289 days). The most commonly reported AE (irrespective of causality) was COPD, which had a lower incidence with budesonide/formoterol 320/9 μg versus placebo and a slightly higher incidence with budesonide/formoterol 160/9 μg versus placebo (table V). AEs considered by the investigator to be related to study medication were generally similar among treatment groups with the most commonly reported being oral candidiasis, COPD and dysphonia (see table SI in the supplementary material).\nTable VOverall adverse events (AEs) [irrespective of relationship to study medication] reported by ≥3% of patients The total incidence of pneumonia-related AEs (pneumonia, bronchopneumonia, lobar pneumonia and pneumonia staphylococcal) was similar for budesonide/formoterol 320/9 μg and 160/9 μg (4.0% and 3.4%, respectively) compared with formoterol (3.4%) and placebo (5.0%) [see table SII in the supplementary material]. For potential lung infections other than pneumonia, the incidence was slightly higher in the active treatment groups versus placebo, which was driven largely by bronchitis (see table SII in the supplementary material). 
AEs typically or potentially associated with local and systemic effects of inhaled corticosteroids (local effects: aphonia, dysphonia, oral candidiasis and candidiasis; systemic effects: weight gain, adrenal suppression, ocular effects, skin effects, psychiatric disorder, diabetes control, thirst, taste effects and bone effects) were more frequent with budesonide/formoterol 320/9 μg (10.3% and 4.0%, respectively) and budesonide/formoterol 160/9 μg (5.7% and 4.5%, respectively) than formoterol (0.6% and 2.6%, respectively) and placebo (2.5% and 2.7%, respectively). The overall incidence of LABA class effects (i.e. tremor, palpitation, tachycardia, potassium changes, glucose changes, headache, agitation, anxiety, sleep effects and muscle cramp) was low, but higher among active treatment groups (budesonide/formoterol 320/9 μg [9.5%], budesonide/formoterol 160/9 μg [8.9%] and formoterol [6.5%]) versus placebo (4.8%). The incidence of cardiac-related AEs was higher for all active treatments (10.5–11.3%) versus placebo (6.9%). The most common cardiac-related AEs were hypertension and angina pectoris (budesonide/formoterol 320/9 μg [2.4% and 1.2%, respectively], budesonide/formoterol 160/9 μg [1.6% and 0.8%, respectively], formoterol [2.6% and 1.8%, respectively] and placebo [2.1% and 1.0%, respectively]). Although the number of atrial fibrillation AEs was low, all cases occurred within the active treatment groups. Discontinuations due to AEs (DAEs) occurred in 238 patients, with a similar incidence observed across treatment groups (range 11.3–12.5%) [see table SIII in the supplementary material]. The most common DAE was COPD, which was highest in the formoterol group (7.3%) and lowest in the budesonide/formoterol 320/9 μg group (4.0%); the incidence was similar in the budesonide/formoterol 160/9 μg (6.1%) and placebo (6.0%) groups (see table SIII in the supplementary material). The overall incidence of non-fatal serious adverse events (SAEs) was highest with formoterol (17.8%) compared with the other treatment groups and higher with budesonide/formoterol 320/9 μg (15.6%) and budesonide/formoterol 160/9 μg (13.6%) versus placebo (12.1%) [see table SIV in the supplementary material]. The most common non-fatal SAEs (occurring in five or more patients [all treatment groups combined]) were COPD (6.8%), pneumonia (1.5%), atrial fibrillation (0.5%), angina pectoris (0.3%), bronchitis (0.3%) and coronary artery disease (0.3%). These non-fatal SAEs occurred at a similar incidence in all treatment groups, except for COPD, which was slightly more common in the active treatment groups (budesonide/formoterol 320/9 μg [7.1%], budesonide/formoterol 160/9 μg [6.7%] and formoterol [7.9%]) versus placebo (5.6%). Fifteen patients died during the randomized treatment period with no imbalance across treatments: three in the budesonide/formoterol 320/9 μg group; six in the budesonide/formoterol 160/9 μg group; two in the formoterol group; and four in the placebo group. Fifteen patients died after the randomized treatment period (≥2 days after stopping study treatment) with no imbalance across treatments: five in the budesonide/formoterol 320/9 μg group; two in the budesonide/formoterol 160/9 μg group; four in the formoterol group; and four in the placebo group. None of the deaths were considered drug related by the investigator. Clinically significant changes in vital signs, 24-hour urinary cortisol, and 12-lead ECGs, QT interval and Holter assessments were rare. 
The geometric mean values of 24-hour urinary cortisol at 6 months and end of treatment were lower in both budesonide/formoterol dosage groups compared with formoterol and placebo (see table SV in the supplementary material). Significant differences in 24-hour urinary cortisol were observed between the budesonide/formoterol 320/9 μg and placebo groups (p = 0.035) at month 6 and between the budesonide/formoterol 320/9 μg and formoterol groups (p = 0.044) at end of treatment. Corrected QT (QTc) intervals ≥450 msec were experienced by similar percentages of patients across all treatment groups, whereas few patients experienced QTc intervals ≥500 msec or a QTc change ≥60 msec (see table SVI in the supplementary material). Data from ECG and Holter recordings showed a similar incidence of new onset atrial fibrillation in the active and placebo groups. BMD was stable over the 1-year study period in all treatment groups, with small but statistically significant differences in changes from baseline observed for budesonide/formoterol 320/9 μg compared with all other treatments for total lumbar spine BMD (p ≤ 0.037) and for budesonide/formoterol 320/9 μg versus formoterol for total hip BMD (p = 0.012) [see table SVII in the supplementary material]. The differences between the treatment groups in changes from baseline in BMD were of unclear clinical relevance, as the geometric least squares mean ratios for these treatment group comparisons were close to 1 (range 0.98–0.99). Minor increases in lenticular opacities and intraocular pressure were noted across all treatment groups in the ophthalmological analysis subset; the magnitude of these changes was similar among the treatment groups (see table SVIII in the supplementary material). Small but significant differences were observed for the change from baseline in posterior subcapsular score between the budesonide/formoterol groups (p = 0.022). Clinically significant changes in ophthalmological assessment were infrequent.\n\nBODY.DISCUSSION:\nIn this study, both budesonide/formoterol dosage strengths resulted in significant improvements from baseline over the randomized treatment period in pulmonary function, health status and COPD symptoms, as measured by dyspnoea, cough and sputum production, and a clinically significant reduction in COPD exacerbation rate compared with placebo. In addition, both budesonide/formoterol pMDI dosages were well tolerated relative to formoterol alone and placebo. The results of this 12-month study confirm those from the earlier related 6-month study by Tashkin et al.[4] and are consistent with those from studies of budesonide/formoterol DPI.[2,3] Taken together, the results demonstrate that this ICS/LABA combination provides benefits beyond formoterol alone in the treatment of COPD. The contribution of budesonide to improvements from baseline in pulmonary function and COPD symptoms was demonstrated by the greater efficacy of budesonide/formoterol 320/9 μg compared with formoterol alone for the co-primary variables of pre-dose FEV1 and 1-hour post-dose FEV1, and for dyspnoea and total BCSS scores, supporting use of the higher-dose combination for COPD symptoms. Both budesonide/formoterol dosage strengths demonstrated greater improvements from baseline compared with formoterol alone in morning and evening PEF, 12-hour FEV1, symptom scores and rescue medication use. 
In addition, the 25–29% reduction in exacerbation rate observed with both budesonide/formoterol dosage strengths beyond the substantial reduction achieved with formoterol alone further demonstrates the important contribution of budesonide to the combination product. The magnitude of the reduction in exacerbation rates reported in this study is similar to that reported previously for fixed-dose regimens of budesonide/formoterol 320/9 μg twice daily[2–4] and fluticasone propionate/salmeterol[11–13] relative to LABA alone. A key difference between this 12-month study and the 6-month study of similar design reported by Tashkin et al.[4] is that this study was powered to show a difference in exacerbations, while the 6-month study was not. This study demonstrated a significant reduction in exacerbation rates in patients treated with budesonide/formoterol 320/9 μg and 160/9 μg compared with those receiving formoterol or placebo. The fact that a greater percentage of patients in the formoterol and placebo groups who previously received ICS therapy, either alone or in combination, experienced exacerbations compared with those who did not previously receive ICS therapy may result in a skewing of the data; however, a similar response was observed in combination budesonide/formoterol patients with a greater percentage of patients who previously received ICS experiencing exacerbations compared with patients who were not previously treated with ICS therapy. Similarly, a greater percentage of patients who previously received ICS therapy discontinued in the formoterol and placebo groups than those who did not previously receive ICS therapy; however, this trend was reversed in the combination budesonide/formoterol groups. The long-term nature of this study allowed for a more thorough assessment of tolerability compared with the 6-month study.[4] No differences in pneumonia incidence were observed among treatment groups in this study or in the 6-month study, despite a shorter exposure time in the placebo group.[4] These results contrast with previous COPD studies that showed an increased incidence of pneumonia with fluticasone propionate/salmeterol treatment.[11–14] However, in this study and the 6-month study,[4] the incidence of potential lung infections other than pneumonia was slightly higher in all active treatments compared with placebo; these differences were largely driven by an increased incidence of bronchitis. The incidence of SAEs, including COPD SAEs, was slightly higher in both budesonide/formoterol groups compared with placebo. This finding may be attributed to the earlier discontinuation in the placebo group, which resulted in a shorter study drug exposure (35 and 29 fewer days vs budesonide/formoterol 320/9 μg and 160/9 μg, respectively). Although the overall incidence of atrial fibrillation, a known LABA class effect, was small (1.1%), an imbalance was noted between the formoterol-containing arms and placebo. However, no differences in new-onset atrial fibrillation were apparent among groups based on 24-hour Holter monitoring or ECG data collected at repeated visits during the randomized treatment period. 
Overall, the safety profile of budesonide/formoterol pMDI in this 12-month study was similar to that reported in the 6-month study by Tashkin et al.,[4] suggesting no increase in safety concerns with a longer duration of treatment.\n\nBODY.CONCLUSIONS:\nBoth dosage strengths of budesonide/formoterol pMDI (320/9 μg and 160/9 μg) were effective and well tolerated over 1 year of treatment in this population of patients with moderate to very severe COPD. In addition, budesonide/formoterol pMDI 320/9 μg demonstrated greater efficacy compared with formoterol for pre-dose FEV1, suggesting that the higher dosage strength containing twice the daily dose of budesonide is appropriate in this patient population.\n\nBODY.ELECTRONIC SUPPLEMENTARY MATERIAL:\nSupplementary material, approximately 247 KB.\n\n**Question:** Compared to Placebo what was the result of Budesonide/formoterol pMDI 320/9 μg on Cough score improvement?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly increased\n"
}
] |
significantly increased
|
452
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: Concomitant therapy with Cineole (Eucalyptole) reduces exacerbations in COPD: A placebo-controlled double-blind trial\n\n ABSTRACT.BACKGROUND:\nThe clinical effects of mucolytics in patients with chronic obstructive pulmonary disease (COPD) are discussed controversially. Cineole is the main constituent of eucalyptus oil and mainly used in inflammatory airway diseases as a mucolytic agent. We hypothesised that its known mucolytic, bronchodilating and anti-inflammatory effects as concomitant therapy would reduce the exacerbation rate and show benefits on pulmonary function tests as well as quality of life in patients with COPD.\n\nABSTRACT.METHODS:\nIn this double-blind, placebo-controlled multi-center-study we randomly assigned 242 patients with stable COPD to receive 200 mg of cineole or placebo 3 times daily as concomitant therapy for 6 months during winter-time. The frequency, duration and severity of exacerbations were combined as primary outcome measures for testing as multiple criteria. Secondary outcome measures included changes of lung function, respiratory symptoms and quality of life as well as the single parameters of the exacerbations.\n\nABSTRACT.RESULTS:\nBaseline demographics, lung function and standard medication of both groups were comparable. During the treatment period of 6 months the multiple criteria frequency, severity and duration of exacerbations were significantly lower in the group treated with cineole in comparison to placebo. Secondary outcome measures validated these findings. Improvement of lung function, dyspnea and quality of life as multiple criteria were statistically significant relative to placebo. Adverse events were comparable in both groups.\n\nABSTRACT.CONCLUSION:\nConcomitant therapy with cineole reduces exacerbations as well as dyspnea and improves lung function and health status. This study further suggests cineole as an active controller of airway inflammation in COPD by intervening in the pathophysiology of airway inflammation of the mucus membrane.\n\nABSTRACT.TRIAL REGISTRATION:\nISRCTN07600011\n\nBODY.INTRODUCTION:\nChronic obstructive pulmonary disease (COPD) is considered to be a multi-component disease comprising structural and functional changes inside and outside the lungs. Effective medications for COPD are available and can reduce or prevent symptoms, increase exercise capacity, reduce the number and severity of exacerbations and improve health status. In common clinical use are bronchodilators as β-agonists, anticholinergic drugs and methylxanthines as well as glucocorticosteroids. The clinical effectiveness of these drugs has been shown in many controlled clinical studies [1-7]. Airway inflammation and mucociliary dysfunction in COPD patients have direct clinical consequences on the decline of lung function. 
As a consequence of cigarette smoking the ciliated epithelium is damaged and the mucus membrane becomes inflamed, resulting in decreased mucociliary transport leading to an accumulation of mucus within the airway so that the likelihood of recurrent respiratory infection is increased. Cineole has positive effects on the beat frequency of the cilias in the mucus membrane and has bronchodilating and anti-inflammatory effects. Therefore, it is appropriate to postulate that cineole will show positive influence on the exacerbations as well as on the lung function in COPD patients – even as concomitant therapy [8-13]. We conducted a randomised, placebo-controlled multi-center trial with the concomitant prescription of cineole – the main constituent of eucalyptus oil – in patients with stable COPD. The primary hypothesis was that cineole would decrease the number, severity and duration of exacerbations. Secondary outcome measures were lung function, severity of dyspnea and quality of life as well as relevant adverse effects.\n\nBODY.MATERIALS AND METHODS.ENROLMENT OF PARTICIPANTS:\nParticipants were recruited in the offices of 4 general practitioners and 7 specialists in pneumology in Germany. The study was carried out during the winter seasons 2003/2004 and 2004/2005 over a treatment period of 6 months in the winter and starting the enrolment in September at the earliest. The participants were 40 to 80 years of age and had airflow limitation with FEV1 of less than 70% and more than 30% of the predicted value (moderate to severe COPD; according to GOLD classification stages 2 and 3 of COPD) [14]. Patients with an increase of more than 15% and more than 200 ml in FEV1 after inhalation of β-agonists (at least 200 μg Salbutamol or equivalent) were excluded according to the definition of COPD of the German Airway-League. [15]. All patients were current smokers or ex-smokers with at least 10 pack years. Patients were excluded if they had severe medical conditions such as bronchial carcinoma, myocardial infarction, alcoholism, heart failure. All randomised patients provided written informed consent and the protocol was approved by the local Ethics Committees at each of the 11 participating centres.\n\nBODY.MATERIALS AND METHODS.TREATMENT GROUPS:\n242 patients were randomly assigned to one of the two treatment groups with stratification according to the clinical centres. All patients were given the necessary dose of capsules each containing 100 mg cineole or no active ingredient. For each group 2 capsules 3 times daily were prescribed resulting in a dose of 600 mg per day for the cineole or no cineole for the placebo group as concomitant therapy. Patients were instructed to take the capsules half an hour before meal so that they could not recognize the smell of cineole. Capsules with active substance and placebo looked absolutely identical and were sealed in blister stripes. The diagnosis of COPD was confirmed according to the current guidelines of \"Global Initiative for Chronic Obstructive Lung Disease\" (GOLD). Frequency, duration, severity and symptoms of exacerbations were defined according to the literature [16-18]. An exacerbation was documented when the duration was more than 3 days or a complex of at least 2 respiratory adverse events with a duration of more than 3 days occurred. 
Exacerbation severity was defined as: mild (Score = 1, increased need for basic medication of COPD which the individual can manage in its own normal environment), moderate (score = 2, increased need for medication and he/she feels the need to seek additional medical assistance) and severe (score = 3, patient recognise obvious and/or rapid deterioration in conditions requiring hospitalisation). Details for the number, duration and severity as well as treatment and symptoms of exacerbations were recorded in the patient's diary for each day. Since the most relevant differentiation for exacerbations are frequency, duration and severity, the multiple criteria were combined as primary outcome measures for the statistical evaluation. Secondary outcome measures were the single parameters of the exacerbation as well as lung function, symptoms and quality of life. Spirometric measurements were carried out before the beginning of the study determining reversibility of the airflow limitation by inhalation of short acting β2-agonists to assure that the reversibility of lung function was less than 200 ml or 15%. Spirometric measurement included determination of forced expiratory volume in 1 second (FEV1), forced vital capacity (FVC) and vital capacity (VC) at the beginning and after 3 and 6 months. Additionally, symptoms score were determined for dyspnea (scores: 0 = caused no problems, 1 = caused occasionally problems, 2 = caused a lot of problems, 3 = the most important problem the patient had), weekly frequency of dyspnea (scores: 0 = no day was good, 1 = 1–2 days were good, 2 = 3–4 days were good, 3 = nearly every day was good, 4 = every day was good), general conditions (scores: 0 = good, 1 = impaired, 2 = bad, 3 = very bad, 4 = unbearable), cough (scores: 0 = never, 1 = rarely, 2 = occasionally, 3 = often, 4 = very often, 5 = nearly continuously). Diagnosis-related quality of life was determined according to the \"St. George's Respiratory Questionnaire\" [19].\n\nBODY.MATERIALS AND METHODS.VISITS AND RANDOMIZATION:\nBefore randomisation we ascertained the patients' eligibility and conducted spirometry. After the randomisation the following parameters were recorded: height, weight, age, time since first symptoms for the diagnosis of COPD, documentation of allergies, concomitant disease, smoking habits (documentation of pack years), number of exacerbations in the year before during winter time, determination of quality of life and current maintenance therapy. The following control visits were carried out after 1, 2, 3, 4, 5 and 6 months recording exacerbations since last visit, frequency of dyspnea, characterisation of dyspnea, hypersecretion and cough as well as adverse events, compliance and change of therapy. Spirometry was carried out at the beginning of the study as well as after 3 and 6 months of treatment. Quality of life was determined at the beginning and at the end of the study.\n\nBODY.MATERIALS AND METHODS.STATISTICAL ANALYSIS:\nThe proposed sample size for the present trial was 240 patients for both treatment groups. The sample size was chosen to detect a minimum difference of 15% of exacerbations after 6 months of treatment. Analysis of efficacy was performed with the intention-to-treat-population including all eligible patients who received at least one dose of medication and had at least one follow-up visit. 
The number of all exacerbations, duration of exacerbations, degree of severity of exacerbation recorded in patients diary during the 6 months treatment period were summarised and the sums compared according to Wei-Lachin's directional test of multiple criteria (equally weighted) [20]. Additionally, single parameters characterising the exacerbations and dyspnea were analyzed exploratory as secondary outcome measures at multiple endpoints according to Wei-Lachin validating the sum-formation. The Wilcoxon-Mann-Whitney-U Test was used for all other secondary outcome measures. Data are expressed as mean values (with SD) and all tests were two-tailed. P-values of 0.05 or less were considered to indicate statistical significance.\n\nBODY.RESULTS:\nA total of 242 patients were randomised and received at least one dose of study medication. 22 patients were excluded from the statistical analysis of efficacy because they did not meet the requirements of the GOLD guidelines since FEV1/VC was > 0.7. 220 patients were eligible according to the GOLD guidelines having COPD of stage II and III. The two treatment groups were well matched with respect to baseline characteristics (table 1). The mean age of the participants at entry was 62 years in both groups. The mean duration of COPD of 13 years as well as 31 pack-years and the basic medication (i.e. ICS, β-agonists, anticholinergics and theophylline) were balanced between the two groups (table 2). Medication was not changed during the treatment period except in occurrence of exacerbations. The baseline lung function and reversibility in both groups were comparable. Treatment compliance was determined by counting the study medication at each visit and was found high and comparable across the treatment groups. Table 1 Base Line Characteristics of the Patients* CHARACTERISTIC PLACEBO (N = 110) CINEOLE (N = 110) Age – yr Mean 62.5 ± 10.2 62.2 ± 9.1 Range 40 – 79 41 – 79 Sex – M/F 75/35 66/44 Weight – kg Mean 79 ± 16 79 ± 14 Range 48 – 120 52 – 113 Height – m Mean 1.71 ± 8.8 1.70 ± 8.0 Range 1.48 – 1.90 1.54 – 1.92 Years since appearance of COPD Mean 12.8 ± 9.7 13.6 ± 10.9 Range 1 – 56 1 – 59 Severity of COPD [number of patients] Moderate COPD (II) 69 68 Severe COPD (III) 41 42 FEV 1 /VC [%] 58.0 58.7 FEV 1 [l] 1.61 ± 0.5 1.62 ± 0.5 FEV 1 % predicted value 54.4 54.9 Reversibility (increase of FEV 1 after inhalation of β-agonist) [%] 5.5 4.5 Allergies (number) 10 9 Pack years 31.5 ± 19 31.3 ± 16 * Plus – minus values are means ± SD. Table 2 Constant concomitant therapy THERAPY PLACEBO (N = 110) CINEOLE (N = 110) Inhaled β-agonists* (LABA and SABA) 83 88 Inhaled anticholinergics 29 35 Inhaled corticosteroids (ICS) 27 25 Theophylline 38 32 * including combinations \n\nBODY.RESULTS.PRIMARY OUTCOME MEASURES.EXACERBATIONS:\nAt baseline the mean exacerbation rate was 3.2 in both groups during the previous year. The number of patients with exacerbations during the treatment period in the cineole group was 31 patients (28.2%) and 50 patients (45.5%) in the placebo group. As primary outcome measure the sum of exacerbations for frequency, duration and severity at all 6 following visits as composite endpoint (equally weighted) were calculated according to the Wei-Lachin Test procedure for multiple criteria and showed a statistically significant difference for the primary outcome measure between both treatment groups (p = 0.0120) Table 3. 
Calculating these single parameters alone exploratory according to Mann-Whitney-U it could be proven that they were statistically significant too (i.e. for frequency 0.0069 an, duration 0.0210 and for severity 0.0240). Validating these results by Wei-Lachin-Test procedure for multiple endpoints for the number, the degree and the severity of exacerbations, the degree during the 6-month treatment the differences between both treatment groups were statistically significant (p = 0.0016, 0.0031 and 0.0025) which underlines higher sensitivity of this test-procedure. Medication of the exacerbations with additionally applied corticosteroids occurred in 17 cases in the cineole and in 25 cases in the placebo group which was not statistically significant different. Table 3 Mean of sum of number, duration and severity of exacerbations during 6 months of treatment with cineole or placebo* PLACEBO CINEOLE SCORE SCORE P VALUE† Sum of exacerbations (number)# 0.9 ± 1.46 0.4 ± 0.82 0.0069 Sum of duration (days)# 5.7 ± 8.9 4.0 ± 10.9 0.0210 Sum of severity (score)# 1.4 ± 2.2 0.8 ± 1.5 0.0242 Summarized parameter (directional test) 0.0120 * Plus – minus values are means ± SD. † P-Values are for the comparison between the two groups. # The sum of the exacerbation parameters is calculated by addition of the documented parameters at all visits. \n\nBODY.RESULTS.SECONDARY OUTCOME MEASURES.LUNG FUNCTION:\nPatients only discontinued inhaled β2-agonist prior to spirometry testing. After inhalation of a β2-agonist the reversibility of lung obstruction (increase of FEV1) at the start of the study was 4.5% in the cineole group and 5.5% in the placebo group (table 1). After 6 months of treatment the mean FEV1 increased by 78 ml (4.7%) in the cineole group (table 4). The mean differences between both groups were not statistically significant (p = 0.0627). After 6 months of treatment an increase of FVC by 62 ml (2.7%) after cineole therapy and a decline of 25 ml (1.1%) after treatment with placebo was determined. The difference concerning change of FVC and VC between both treatment groups was not clinically relevant. Table 4 Secondary outcome measures during 6 months Of treatment with cineole or placebo for change of lung function and dyspnea symptoms * LUNG FUNCTION AND SYMPTOMS PLACEBO CINEOLE BASE LINE 3 MONTHS 6 MONTHS BASE LINE 3 MONTHS 6 MONTHS p-Value FEV 1 [l] 1.61 ± 0.5 1.62 ± 0.5 1.61 ± 0.5 1.62 ± 0.5 1.67 ± 0.5 1.70 ± 0.6 0.0627 † FVC [l] 2.23 ± 0.8 2.25 ± 0.8 2.22 ± 0.7 2.33 ± 0.8 2.36 ± 0.9 2.36 ± 0.9 0.2409 † VC [l] 2.81 ± 0.8 2.71 ± 0.7 2.68 ± 0.8 2.80 ± 0.8 2.73 ± 0.8 2.72 ± 0.9 0.2060 † Trouble in breathing # 1.8 ± 0.9 2.1 ± 0.9 2.2 ± 1.0 1.9 ± 0.9 2.4 ± 1.0 2.5 ± 1.1 0.0103 & Dyspnea in the morning $ 1.1 ± 0.7 0.9 ± 0.7 0.7 ± 0.7 1.1 ± 0.8 0.7 ± 0.7 0.5 ± 0.6 0.0466 & Dyspnea at rest $ 0.7 ± 0.7 0.4 ± 0.6 0.4 ± 0.6 0.6 ± 0.6 0.3 ± 0.5 0.3 ± 0.5 0.0156 & Dyspnea during exercise $ 2.0 ± 0.6 1.8 ± 0.7 1.7 ± 0.8 2.0 ± 0.6 1.7 ± 0.7 1.5 ± 0.9 0.1252 & * Plus – minus values are means ± SD. Higher scores on the symptoms-sum-score indicate more disease activity. † P-Values are for the comparison of the changes from base line to 6 months between the two groups. & P-Values for the comparison are calculated by the multiple criteria calculation of 6 visits by Wei-Lachin between the two groups. # Scores: 0 = no day was good, 1 = 1–3 days were good, 2 = nearly every day was good, 3 = every day was good in a week. 
$ Scores: 0 = did not cause any problems, 1 = sometimes caused problems, 2 = caused a lot of problems, 3 = the most important problem the patient had \n\nBODY.RESULTS.SECONDARY OUTCOME MEASURES.DYSPNEA:\nThe differences between both groups after 6 months of treatment are summarised in table 4. The baseline dyspnea scores in the morning, trouble in breathing, dyspnea at rest and dyspnea during exercise were similar in both groups, indicating a moderate level of a dyspnea for most patients at the beginning of the treatment period. Calculating the values at all 6 visits at multiple endpoints the difference between both treatment groups were statistically significant for trouble in breathing, dyspnea in the morning and dyspnea at rest. Dyspnea during exercise did not show a statistically significant difference between treatment groups.\n\nBODY.RESULTS.SECONDARY OUTCOME MEASURES.QUALITY OF LIFE:\nAt 6 months the mean improvement of SGRQ total symptom score was -9.1 after treatment with cineole and -4.1 after treatment with placebo (table 5). The difference between treatment groups was not statistically significant (p = 0.0630). The improvement of the symptom score was statistically significant (p = 0.0224). The differences of changes of activity score and impact score were not statistically significant between the two groups. Table 5 Secondary outcome measures for saint george's respiratory questionnaire (sgrq): Change of total symptom score, symptom score, activity score and impact score During 6 months of treatment with cineole or placebo* SGRQ SCORES PLACEBO CINEOLE P VALUE † BASE LINE 6 MONTHS BASE LINE 6 MONTHS Symptom score 57.4 ± 20.2 48.5 ± 24.9 57.3 ± 20.4 43.8 ± 24.3 0.0224 Activity score 53.4 ± 21.9 50.0 ± 24.8 52.1 ± 20.5 43.5 ± 22.4 0.2032 Impact score 37.2 ± 20.9 33.9 ± 23.3 35.8 ± 20.2 27.4 ± 19.2 0.1126 TOTAL SYMPTOM SCORE 45.6 ± 18.9 41.3 ± 22.5 44.4 ± 17.8 34.5 ± 18.9 0.0630 * Plus – minus values are means ± SD. Lower scores on the scores indicate higher quality of life. † P-Values are for the comparison of the changes in scores from base line to 6 months treatment between the two groups. \n\nBODY.RESULTS.MULTIPLE CRITERIA:\nSymptomatic of COPD patients is characterized by lung function, dyspnea and quality of life. In order to include these relevant secondary outcome measures together we used these parameters equally weighted as multiple criteria. The increase of FEV1, amelioration of dyspnea and improvement of total score of \"SGRQ\" were calculated according to the Wei-Lachin Test procedure for multiple criteria and showed a statistically significant difference for the relevant secondary outcome measure between both treatment groups (p = 0.0024).\n\nBODY.RESULTS.MULTIPLE CRITERIA.OTHER FINDINGS:\nConcomitant therapy with β-agonists and anticholinergics, corticosteroids or combinations and methylxanthines in both groups were comparable (table 2). The global assessment of efficacy showed a significant difference, which correlated with the amelioration of clinical findings after treatment with cineole.\n\nBODY.RESULTS.SIDE EFFECTS:\nAll patients receiving the study medication (including those with FEV1/VC > 0.7) were included in the safety examination. During treatment side effects were seen in 22 patients whereas in 17 cases adverse events were not related to the study medication. In the placebo group 11 adverse events were estimated not being related to the study medication whereas 2 cases were interpreted as being related to the study medication (heartburn). 
During treatment with cineole 9 cases of adverse events were reported whereas 6 adverse events were reported not being related to the study medication. In 3 patients (nausea, diarrhoea, heartburn) the adverse events were estimated being related to the study medication. The difference between the two treatment groups was neither clinically relevant nor statistically significant. Safety examinations of the global assessment showed no difference between the two treatment groups. During the 6 months of treatment compliance was good in all patients.\n\nBODY.DISCUSSION:\nPatients with COPD experience exertional breathlessness caused by bronchoconstriction, mucous secretion, edema of the airway wall and loss of attachments to the terminal airways [14]. Hence pharmacological therapy has focused on the treatment of airway obstruction and inflammation to improve symptoms primarily dyspnea as well as health status. Bronchodilators are the mainstay of pharmacotherapy for patients with COPD. On the other hand it is well known that mucociliary dysfunction has direct clinical implications. Mucus is beneficial in normal quantities but in case of mucus hypersecretion when cilia fail, the mucus pool allows bacterial colonisation. The presence of pooled bacteria results in the release of bacterially-derived toxins that destroy the underlying epithelium and trigger a neutrophilic response [21]. Taking into account the known pharmacological effects of the defined natural product cineole it was assumed that this compound might be beneficial for patients with COPD. Exacerbations have been shown to be reduced in various studies evaluating treatment with inhaled β-agonists or corticosteroids or combinations. The major finding of the present study is that cineole provides a significantly greater reduction of frequency, duration and severity of exacerbations than placebo. The exacerbations were analyzed as multiple criteria for the relevant specifications frequency, duration and severity. The result for the primary outcome measure could be validated exploratory for the single parameters showing a statistical significant difference too. Therefore, both testing procedures are valid whereas the testing multiple criteria is more sensitive. This proof of efficacy is an important contribution to the known pharmacological properties of cineole which therefore is not a mucolytic agent only. The result of this study suggests important new evidence of superior therapeutic efficacy of additional therapy with cineole to better control COPD exacerbations compared to the currently recommended combined therapy with ICS and LABA. Furthermore, additional therapy with cineole may positively interact with anti-inflammatory activity of recommended airway therapies in COPD and may serve to protect airways from other environmental agents. In general, quality of life deteriorates slowly in patients with COPD. During the period of 6 months treatment of this study we observed a decrease of the scores of SGRQ in both treatment groups. The reason for this finding in the placebo group, too, seems to be due to patients receiving better medical attention when involved in clinical trials. The higher rate of exacerbations in the winter before the study began in both treatment groups is due to the same reason. Our present data with cineole therapy underline a greater improvement than after treatment with placebo. Differences in change of FEV1 were not statistically significant (p = 0.0627). 
These findings correlate with a decline of FEV1 in the placebo group of 0.4% and an increase of 4.8% in the cineole group. This value is nearly identical with the increase of FEV1 after the inhalation of β-agonists, when testing the reversibility, before concomitant therapy with cineole started for six months.\n\nBODY.CONCLUSION:\nThese collective findings underline that cineole not only reduces exacerbation rate but also provides clinical benefits as manifested by improved airflow obstruction, reduced severity of dyspnea and improvement of health status. Therefore, cineole can provide a useful treatment option for symptomatic patients with COPD in addition to treatment according to the guidelines. These results have to be seen in context with socio-economic aspects. As COPD is an extremely costly disease and a cause of major financial and social burden concomitant therapy with cineole can be recommended, especially due to the lack of relevant side effects and relatively low cost. The results of our study provide good evidence that cineole will show benefits as additional therapeutic regimen in patients with COPD. These findings correspond to the interpretation of the efficacy-study with Carbocysteine but not with Acetylcysteine, because this medication did not show a significant reduction of exacerbations [22,23].\n\nBODY.COMPETING INTERESTS:\nThe authors declare that they have no competing interests.\n\nBODY.AUTHORS' CONTRIBUTIONS:\nThe study was designed and the protocol developed by HW, C S and UD. CS worked as principal investigator. Statistical analysis was carried out by UD. The results were interpreted by HW, CS and UD. All authors gave substantial critical input in revising the manuscript.\n\n**Question:** Compared to placebo what was the result of cineole on Mean of sum of number, duration and severity of exacerbations during 6 months of treatment?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly decreased\n"
}
] |
significantly decreased
|
455
|
[
{
"role": "user",
"content": "You are an expert medical assistant specialized in analyzing clinical trial conclusions. Your task is based on an article context and a Question created from a ICO (outcome, intervention, comparator) triplet, to arrive at the correct conclusion regarding their relation. \n\nYou will be given an article context preceded by **Article:**, which contains the Abstract and Body sections of a randomized controlled trial. **Question:** will denote the information present on the ICO triplet. Pay close attention to the details in the article to accurately assess the conclusion from the trial.\n\n**Article:** \n\nTITLE: The Effect of Clofibrate on Decreasing Serum Bilirubin in Healthy Term Neonates under Home Phototherapy\n\n ABSTRACT.OBJECTIVE:\nThis study was designed to determine the effect of clofibrate on neonatal uncomplicated jaundice treated with home phototherapy.\n\nABSTRACT.METHODS:\nThis clinical trial study was performed on 60 newborns with jaundice that received home phototherapy. Inclusion criteria were body weight between 2500 to 4000 gr, breastfed, total serum bilirubin (TSB) between 14 to 20 mg/dl, aged over 72 hours. The neonates were randomly divided into two groups. All received home phototherapy. Group I received a single dose of 50 mg/kg clofibrate and the other group served as control group. Total serum bilirubin level was measured every 24 hours.\n\nABSTRACT.FINDINGS:\nTwo groups were matched regarding weight, sex, age and first TSB. At 24 and 48 hours of treatment, the mean values of TSB in the clofibrate group were 13.72 (1.56), 9.5 (0.56) and in the control group 15.30 (1.44), 12.6 (1.44). The results show that TSB was significantly decreased after 24 and 48 hours in clofibrate group (P<0.001). The mean duration of phototherapy in group I was 72(0.0) hours and in the control group 76.80 (±9.76) hours. The duration of phototherapy was significantly shorter in clofibrate group (P<0.001).\n\nABSTRACT.CONCLUSION:\nClofibrate is effective for outpatients with neonatal hyperbilirubinemia who are under home phototherapy. Of course, further studies are needed for approved routine use of this drug in the treatment of neonatal jaundice.\n\nBODY.INTRODUCTION:\nIn the first day of life, bilirubin production is increased to an estimated average of 8 to 10 mg/kg of body weight per day, an amount about two or three times greater than that of adults[1]. Approximately two thirds neonates become clinically jaundiced. Phototherapy is the most widely used form of therapy for the treatment and prophylaxis of neonatal hyperbilirubinemia[2]. Several potential complications may occur with the use of phototherapy[3], such as retinal degeneration, fluid imbalance, and bronze baby syndrome. Some reports have demonstrated that home phototherapy may be an effective and safe alternative to prolonged hospitalization for healthy full-term neonates with jaundice. Clear advantages of home-centered phototherapy include: (1) reduced coast; (2) avoidance of parent-infant separation; and (3) parental satisfaction[4]. Although some pharmacological agents such as activated charcoal[5] or agar[6] are suggested to treat neonatal jaundice, further study of this type of therapy is needed before to their routine clinical use. Glucuronyl transferase activity can be increased with administration of phenolbarbital[7] or clofibrate[8]. Clofibrate decreases serum cholesterol and triglyceride levels in adults[9]. This drug has been proposed for treatment of neonatal hyperbilirubinemia[10]. 
Mohammadzadeh et al studied the effect of clofibrate on reducing serum bilirubin of neonates beyond the first week of life[11]. Badeli and colleagues determined the effect of clofibrate on uncomplicated hyperbilirubinemia of neonates during the first week of life[12]. Mohammadzadeh and badeli studied the effect of 100mg/kg clofibrate. The aim of the present study was determination the effect of 50 mg/kg of clofibrate on tsb of hyperbilirubinemia of neonates under home phototherapy.\n\nBODY.SUBJECTS AND METHODS:\nFrom April 4, 2007 to June 20, 2008 60 neonates with uncomplicated hyperbilirubinemia who were under home phototherapy in rasht, Iran, entered our study. The ethics committee of our university approved the study. Informed consent of the parents was obtained. Inclusion criteria were body weight between 2500–4000 grams with gestational age 38 to 41 weeks, breastfed, and having total serum bilirubin (TSB) between 14 to 20 mg/dl, with postnatal age above 72 hours. Infants with abo or rh incompatibility, g6pd deficiency, and conjugated hyperbilirubinemia or any concomitant disease were excluded. Portable phototherapy units consist of 4 special blue tubes placed 40 cm above the infant as home phototherapy. Patients were examined and a sample of serum bilirubin was taken daily. The neonates (n=60) were randomly divided into two groups and they were matched regarding weight, sex, age and value of the first TSB. 30 neonates were allocated to clofibrate group (group I) and 30 neonates to control group (group II). All neonates in the two groups received home phototherapy. The control group received no placebo. Clofibrate group (group I) was given a single dose of 50 mg/kg clofibrate before starting phototherapy. In the first day, laboratory tests included estimation of total bilirubin (direct and indirect), blood group including rh of the mother and neonate, complete blood count (cbc), g6pD activity, coombs test, reticulocyte count. Direct and indirect, as well as tsb were measured every 24 hours by the same laboratory till tsb dropped under 10 mg/dl. The duration of phototherapy was recorded with a timer. Data were analyzed using spss 10; t-test and ANOVA were used for analyzing the data. P-value less than 0.05 was considered significant.\n\nBODY.FINDINGS:\nAmong 60 neonates, 30 infants consisting of 18 (60%) females and 12 (40%) males belonged to group I (clofibrate) and 17 (56.7%) females and 13 (43.3%) males to group II (control). There were no statistically significant differences between the two groups regarding weight, age and first tsb value (Table 1). TBS values show significant difference between the two groups 24 and 48 hours after starting phototherapy at home. Group I showed lower value (Table 2). 
Mean (±SD) phototherapy time in group I was 72(±0.0) hours and in group II 76.80 (±9.76) hours.\n Table 1 Mean (±SD) age, weight and first TSB- value in the two groups Parameter Group I Group II Mean (±SD ‡ ) Mean (±SD) \n Age (day) \n 6.80 (3.24) 6.6 (2.50) \n Weight (g) \n 3107 (394.89) 3150 (469.40) \n TSB * (mg/dl) \n 17.24 (1.48) 17.42 (1.44) ‡ SD: standard deviation * TSB: total serum bilirubin Table 2 Comparison of mean total serum bilirubin (mg/dl) after 24 and 48 hours of phototherapy in the two groups Time (hour) Group n Mean(SD * ) total bilirubin (mg/dl) \n p - value \n 24 \n \n Clofibrate(I) \n 30 13.72 (1.56) <0.001 \n Control (II) \n 30 15.30 (1.44) \n 48 \n \n Clofibrate(I) \n 30 9.5 (0.56) <0.001 \n Control (II) \n 30 12.6 (1.44) * Standard Deviation The duration of phototherapy was significantly shorter in clofibrate group (P<0.001). All neonates in group I needed phototherapy still after 72 hours but in group II 24 (80%) neonates had to receive phototherapy for 72 and 6 (20%) for 96 hours. Serum bilirubin levels after physician's examining were measured at beginning and 24, 48 hours after phototherapy. Bilirubin of both groups was measured in the same laboratory. On serial daily examination during phototherapy and up to 2 days after that no problems or side effects were observed. Also for a period of two months no complication was detected.\n\nBODY.DISCUSSION:\nIn this clinical trial study we determined the effect of lower dose (50 mg/kg) of clofibrate on neonatal hyperbilirubinemia in term neonates under home phototherapy. In the present study we demonstrated that in clofibrate group there was lower tsb after 24 and 48 hours home phototherapy compared with control group. Also the neonates in group I needed to receive shorter phototherapy than those in group II. The neonatal hyperbilirubinemia is the most common disease in neonatal period. Although there are advantages of home phototherapy, several potential complications may occur with its use. At present there is no safe drug for treatment of neonatal icterus and shortening of phototherapy time. The effect of numerous drugs on bilirubin metabolism and reducing hyperbilirubinemia has been identified. Metalloporphyrins and d-penicillamine act by inhibition of heme oxygenase, agar and charcoal by decreasing entrohepatic circulation. The clofibrate and phenobarbital are potent inducers of microsomal enzymes that increase bilirubin conjugation and excretion[13]. Clofibrate like phenobarbital is a hepatic bilirubin metabolism inducer, in addition causes 100% increase of hepatic bilirubin clearance within 6 hours with no drowsiness effect in contrast to the latter. Clofibrate when used as an antilipidemic agent in adults, has some side effects such as nausea, gastrointestinal disturbance, vomiting and loose stools[14]. Other possible complications include cramps, fatigue, pruritus and alopecia[14]. None of these side effects were reported in neonates with a single dose of clofibrate[15, 16]. Like phenobarbital, clofibrate increases bilirubin conjugation and excretion and is a better enhancer of glucuronosyl transferase induction causing 100% increase of hepatic bilirubin clearance within 6 hours[17], sooner than phenobarbital[13]. Phenobarbital has a long half life and its effect on severe jaundice is questionable. Phenobarbital also causes drowsiness in neonates and may slow down the oxidation of bilirubin in the brain leading to worse bilirubin toxicity[18]. 
Although we detected in our studies no side effects in infants with a single dose of clofibrate, determination of minimum and effective dose would be important. This study demonstrated that a single dose of clofibrate (50mg/kg) significantly reduces indirect hyperbilirubinemia in healthy breastfed term newborns with home-based phototherapy. This result is similar to the studies of mohammadzadeh et al and badeli et al, noting that they used 100 mg/kg of the drug in hospitalized patients. Our study demonstrated that lower dose of clofibrate is effective on neonatal hyperbili-rubinemia receiving home phototherapy as well. Glabilan[17] used clofibrate in the treatment of early jaundice in full term neonates. He found significant reduction in bilirubinemia at 16 hours and decrease in the intensity and duration of jaundice and also phototherapy requirement. Lindenbaum showed significant reduction 16 hours after treatment with clofibrate in TSB of 47 neonates[19]. Other studies in Iran and France confirmed the beneficial effect of clofibrate for reducing of TSB in babies with no risk factor for hemolysis[20]. In conformity with those previous studies, we did not detect any untoward complications during therapy and after two months follow up. Twenty mothers discontinued home phototherapy because of their own concern or others' comments; they were excluded of the study.\n\nBODY.CONCLUSION:\nThis study suggests that a lower dose of clofibrate is effective in outpatients with neonatal hyperbilirubinemia under home phototherapy. Similar to other studies we found no complications. Further studies are needed for approved routine use of this drug in non-hemolytic hyperbilirubinemia of healthy term inewborns as outpatients.\n\n**Question:** Compared to control group what was the result of single dose of 50 mg/kg clofibrate on The duration of phototherapy?\n\nPlease output \"significantly increase\", \"no significant difference\" or \"significantly decrease\" corresponding to the correct option. Be as accurate as possible.\n"
}
] |
[
{
"role": "assistant",
"content": "**Answer:** significantly decreased\n"
}
] |
significantly decreased
|